I think there’s an ambiguity between “realism” in the sense of “these statements I’m making are answers to a well-formed question and have a truth value” and “morality is a transcendent ineffable stuff floating out there which compels all agents to obey and could make murder right by having a different state”.
Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It’s what people are automatically going to think you’re talking about if you go around shouting “Yes Virginia, there are moral facts after all!”
Meanwhile, the general public has a term for the view that you and I share: they call it “moral relativism”.
I don’t recall exactly, and I haven’t yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think) who objected, not to your actual meta-ethical views, but to the way that you vigorously denied that you were a “relativist”; and you misunderstood them/us as objecting to your theory itself (I think you maybe even threw in an accusation of not comprehending the logical subtleties of Löb’s Theorem).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans. Thus, it is automatically subject to the “chauvinism” objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another—why should we do what we prefer rather than what they prefer? The correct answer is, of course, “because that’s what we prefer”. But people find that answer unpalatable—and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don’t think they share the same values. Now, we may not like the term “relativism”, but it seems to me that this “chauvinism” objection is one that you (and I) need to take at least somewhat seriously.
Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view.
No, it’s not. The naive, common-sense human view is that sneaking into Jane’s tent while she’s not there and stealing her water-gourd is “wrong”. People don’t end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion—that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong—is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans
why should we do what we prefer rather than what they prefer? The correct answer is, of course, “because that’s what we prefer”.
See above. The correct answer is “Because children shouldn’t die, they should live and be happy and have fun.” Note the lack of any reference to humans—this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.
This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y’all don’t seem to be getting it...
I agree that this constitutes relativism, and deny that I am a relativist.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
I have the same feeling, from the other direction.
I feel like I completely understand the error you’re warning against in No License To Be Human; if I’m making a mistake, it’s not that one. I totally get that “right”, as you use it, is a rigid designator; if you changed humans, that wouldn’t change what’s right. Fine. The fact remains, however, that “right” is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn’t decide to single out and call “right”, and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, “This is a nice thing we’ve got going here; let’s preserve it.”
Yes, of course that doesn’t constitute a general license to look at the brains of whatever species you happen to be a member of to decide what’s “right”; if the Babyeaters or Pebblesorters did this, they’d get the wrong answer. But that doesn’t change the fact that there’s no way to convince Babyeaters or Pebblesorters to be interested in “rightness” rather than babyeating or primality. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.
And yes, of course, it’s a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness—that’s why moral realism is a mistake!
I promise to take it seriously if you need to refer to Löb’s theorem in your response. I once understood your cartoon guide and could again if need be.
If we concede that when people say “wrong”, they’re referring to the output of a particular function to which we don’t have direct access, doesn’t the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we’re looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what’s special about this one is precisely the relationship humans have with it.
Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It’s what people are automatically going to think you’re talking about if you go around shouting “Yes Virginia, there are moral facts after all!”
Agreed that this is important. (ETA: I now think Eliezer is right about this.)
Meanwhile, the general public has a term for the view that you and I share: they call it “moral relativism”.
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands “moral relativism” to exclude (b), and I don’t think there’s any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans.
Eliezer doesn’t define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands “moral relativism” to exclude (b)
I think that’s uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person’s mouth can ever be wrong is scarcely worth discussing.
Eliezer doesn’t define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans.
The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there’s a causal relation between humans’ instantiating the computation and Eliezer’s referring to it.
surely there’s a causal relation between humans’ instantiating the computation and Eliezer’s referring to it.
Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification. We shouldn’t save babies because-morally it’s the human thing to do but because-morally it’s the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The moral relativist who says that doesn’t really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say “morality” where other relativists would say “the morality that humans in fact have”.
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn’t take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists’ use of “morality”, it is that their use doesn’t prejudge the question of whether all humans implement the same compulsive logic.
Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you’re part of.
I agree that it seems as though I just don’t understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don’t understand.
I don’t claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don’t feel strongly that saving babies is “right”, whenever you write, “saving babies is the right thing to do”, I translate this as, “X is the right thing to do” where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.
Then you write, “What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness.”
How is wrongness or rightness baked into a subject matter?
Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification
Of course it isn’t, because we’re doing meta-ethics here, and don’t yet have access to the notion of “moral justification”; we’re in the process of deciding which kinds of things will be used as “moral justification”.
It’s your metamorality that is human-dependent, not your morality; see my other comment.
Now I’m confused. I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Since we don’t have conscious access to our premises, and we haven’t finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that’s not like a philosopher of pure emptiness constructing justificationness from scratch by appealing to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)
I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you’ll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn’t mean you’re appealing to the conclusion you’re trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
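(To make the analogy concrete, here is a minimal sketch of such a proof in Lean. It is my own illustration, not part of the original comment; the names `N` and `add` are invented for the example, and the proof goes through purely by computation, without assuming the conclusion.)

```lean
-- Peano-style naturals, defined from scratch so that the proof of 2 + 2 = 4
-- really does come down to lining up successor symbols.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

open N

-- "Concatenating two symbols here and two symbols there to produce four":
-- the proof succeeds by reduction alone; nowhere is 2 + 2 = 4 assumed.
example : add (succ (succ zero)) (succ (succ zero))
        = succ (succ (succ (succ zero))) := rfl
```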
Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn’t necessarily refer explicitly to “humans” as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn’t imply that the AI itself is appealing to “human values” in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
I truly and honestly say to you, Roko, that while you got most of my points, maybe even 75% of my points, there seems to be a remaining point that is genuinely completely lost on you. And a number of other people. It is a difficult point. People here are making fun of my attempt to explain it using an analogy to Löb’s Theorem, as if that were the sort of thing I did on a whim, or because of being stupid. But… my dear audience… really, by this point, you ought to be giving me the benefit of the doubt about that sort of thing.
Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
Well, you did make a claim about what is the right translation when speaking to babyeaters:
we and they are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as “morality” in both cases. Morality is about how to save babies, not eat them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the babyeaters would agree with us about what is moral, we would agree with them about what is babyeating
But there has to be some standard by which you prefer the explanation “we mistranslated the term ‘morality’” to “we disagree about morality”, right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:
“We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as “morality” in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us about what is moral, we would agree with them about what is familydutyhonoring.”
ETA: A lot of positive response to this, but let me add that I think a better term in the last place would be something like “morality-to-Spaniards”. The intuition behind the original phrasing was to show how you can redefine Spanish standards of morality as not “morality”, but rather just “things that we place different priority on”.
But it’s clearly absurd there: the correct translation of ética is not “ethics-to-Spaniards”, but rather, just plain old “ethics”. And the same reasoning should apply to the babyeater case.
To go a step further, moral disagreement doesn’t require a language barrier at all.
“We and abolitionists are talking about a different subject matter and it is an error of the “computer translation programs” that the word comes out as “morality” in both cases. Morality is about how to create a proper relationship between races, everyone knows that and they happen to be right. If we could get past difficulties of the “translation”, the abolitionists would agree with us about what is moral, we would agree with them about what is abolitionism.”
Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.
he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans
No, I understand that your long list wouldn’t change if humanity itself changed, that if I altered every human to like eating babies, that wouldn’t make babyeating right, in the Eliezer world.
“the rightness computation that humanity just happens to instantiate” is different from “whatever computation humanity instantiates”
Because Eliezer made the ingenious move of redefining “should” to mean “do what we prefer”.
He doesn’t define it indexically (like this) or in terms of humans; as I understand it, he defines it in terms of an objective computation that happens to be instantiated by humans (No License to be Human).
As I understand it, relativism doesn’t mean “refers explicitly to particular agents”. Suppose there’s a morality-determining function that takes an agent’s terminal values and their psychology/physiology and spits out what that agent should do. It would spit different things out for different agents, and even more different things for different kinds of agents (humans vs babyeaters). Nevertheless, this would not quite be moral relativism because it would still be the case that there’s an objective morality-determining function that is to be applied to determine what one should do. Moral relativism would not merely say that there’s no one right way one should act, it would also say that there’s no one right way to determine how one should act.
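(To illustrate the distinction in the paragraph above, here is a toy sketch in Python. It is my own example with made-up names and values, not anyone’s actual proposal: one fixed, agent-independent function whose verdicts nevertheless differ with each agent’s terminal values.)

```python
# Hypothetical "morality-determining function": the procedure itself is fixed
# and agent-independent; only the inputs (the agent's values and psychology)
# vary. All names and values below are illustrative assumptions.

def what_should(terminal_values: dict, psychology: dict) -> str:
    """One objective procedure, applied identically to every agent."""
    if terminal_values.get("paperclips"):
        return "maximize paperclips"
    if terminal_values.get("babyeating"):
        return "eat the babies"
    if psychology.get("empathic"):
        return "save the babies"
    return "reflect further on your values"

human = ({"flourishing": True}, {"empathic": True})
babyeater = ({"babyeating": True}, {"empathic": False})

# Same function, different outputs: the sense in which the parent comment
# calls the function objective even though its verdicts are agent-relative.
print(what_should(*human))      # -> save the babies
print(what_should(*babyeater))  # -> eat the babies
```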
It’s not objective, because its results differ with differing terminal values. An objective morality machine would tell you what you should do, not tell you how to satisfy your values. In other words, morality isn’t decision theory.
An objective morality machine would tell you what you should do, not tell you how to satisfy your values
Why must the two be mutually exclusive? Why can’t morality be about satisfying your values? One could say that morality properly understood is nothing more than the output of decision theory, or that outputs of decision theory that fall in a certain area labeled “moral questions” are morality.
Why can’t morality be about satisfying your values?
Because that isn’t how the term “morality” is typically used by humans. The “morality police” found in certain Islamic countries aren’t life coaches. The Ten Commandments aren’t conditional statements. When people complain about the decaying moral fabric of society, they’re not talking about a decline in introspective ability.
Inherent to the concept of morality is the external imposition of values. (Not just decisions, because they also want you to obey the rules when they’re not looking, you see?) Sociologically speaking, morality is a system for getting people to do unfun things by threatening ostracization.
Decision theory (and meta-decision-theory etc.) does not exist to analyze this concept (which is not designed for agents); it exists to replace it.
Morality done right is about the voluntary and mutual adjustment of values (or rather, the actions expressing them).
Morality done wrong can go two ways. One failure mode is hedonism, where the individual takes no notice of the preferences of others; the other is authoritarianism, where “society” (rather, its representatives) imposes values that no one likes or has a say in.
Because that isn’t how the term “morality” is typically used by humans. The “morality police” found in certain Islamic countries aren’t life coaches. The Ten Commandments aren’t conditional statements. … Inherent to the concept of morality is the external imposition of values.
Morality is about all of these things, and more besides. Although “outer” morality as embodied in moral codes and moral exemplars is definitely important, if there were no inner values for humans to care about in the first place, no one would be going around and imposing them on others, or even debating them in any way.
And it is a fact about the world that most basic moral values are shared among human societies. Morality may or may not be objective, but it is definitely intersubjective in a way that looks ‘objective’ to the casual observer.
“Morality” is used by humans in unclear ways and I don’t know how much can be gained from looking at common usage. It’s more sensible to look at philosophical ethical theories rather than folk morality—and there you’ll find that moral internalism and ethical egoism are within the realm of possible moralities.
An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.
You are misusing “objective”. How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?
Subjectivity is multiple truths about one thing, i.e. multiple claims about one thing, which are indexed to individuals and which would be contradictory without the indexing.
In this discussion, I understand there to be three positions:
1. There is one objectively measurable value system.
2. There is an objectively measurable value system for each agent.
3. There are no objectively measurable value systems.
The ‘objective’ and ‘subjective’ distinction is not particularly useful for this discussion, because it confuses the separation between ‘measurable’ and ‘unmeasurable’ (1+2 vs. 3) and ‘universal’ and ‘particular’ (1 vs. 2+3).
But even ‘universal’ and ‘particular’ are not quite the right words: Clippy’s particular preference for paperclips is one that Clippy would like to enforce on the entire universe.
No one holds 3. 1 is ambiguous; it depends on whether we’re speaking “in character” or not. If we are, then it follows from 2 (“there is one objectively measurable value system, namely mine”).
The trouble with Eliezer’s “metaethics” sequence is that it’s written in character (as a human), and something called “metaethics” shouldn’t be.
[edit to expand]: I think that when a cognitivist claims “I’m not a relativist,” they need to have a position like 3 in mind to identify as relativism. Perhaps it was an overreach to use ‘value system’ instead of ‘morality’ in the description of 3; that choice was driven more by my allergy to the word ‘morality’ than by a desire to be correct or communicative.
1 is ambiguous; it depends on whether we’re speaking “in character” or not. If we are, then it follows from 2 (“there is one objectively measurable value system, namely mine”).
One could be certain that God’s morality is correct, but be uncertain what God’s morality is.
The trouble with Eliezer’s “metaethics” sequence is that it’s written in character (as a human), and something called “metaethics” shouldn’t be.
Yes. He has strong intuitions that his own moral intuitions are really true, combined with strong intuitions that morality is this very localized, human thing that doesn’t exist elsewhere. So he defines morality as what humans think morality is… what I don’t know isn’t knowledge.
The trouble with Eliezer’s “metaethics” sequence is that it’s written in character (as a human), and something called “metaethics” shouldn’t be.
People always write in character. If you try to use some different definition of “morality” than normal for talking about metaethics, you’ll reach the wrong conclusions because, y’know, you’re quite literally not talking about morality any more.
Language is different from metalanguage, even if both are (in) English.
You shouldn’t be using any definition of “morality” when talking about metaethics, because on that level the definition of “morality” isn’t fixed; that’s what makes it meta.
I can’t make sense of that. Isn’t the whole point of metaethics to create an account of what this morality stuff is (if it’s anything at all) and how the word “morality” manages to refer to it? If metaethics wasn’t about morality it wouldn’t be called metaethics, it would be called, I dunno, “decision theory” or something.
And if it is about morality, it’s unclear how you’re supposed to refer to the subject matter (morality) without saying “morality”. Or the other subject matter (the word “morality”) to which you fail to refer if you start talking about a made-up word that’s also spelled “m o r a l i t y” but isn’t the word people actually use.
My complaint about the sequence is that it should have been about the orthogonality thesis, but instead ended up being about rigid designation.
I remember it as being about both. (exhibit 1, exhibit 2. The latter was written before EY had heard of rigid designators, though. It could probably be improved these days.)
In one sense, this is trivial. I have to take you into account when I do something to you, just like I have to take rocks into account when I do something to them. You’re part of a state of the world. (It may be the case that after taking rocks into account, it doesn’t affect my decision in any way. But my decision can still be formulated as taking rocks into account.)
In another sense, whether I should take your well-being into account depends on my values. If I’m Clippy, then I shouldn’t. If I’m me, then I should.
Otherwise you are using morality to mean hedonism.
Hedonism makes action-guiding claims about what you should do, so it’s a form of morality, but it doesn’t by itself mean that I shouldn’t take you into account—it only means that I should take your well-being into account instrumentally, to the degree it gives me pleasure. Also, the fulfillment of one’s values is not synonymous with hedonism. A being incapable of experiencing pleasure, such as some form of Clippy, has values but acting to fulfill them would not be hedonism.
Whether or not you morally-should take me into account does not depend on your values; it depends on what the correct theory of morality is. “Should” is not an unambiguous term with a free variable for “to whom”. It is an ambiguous term, and morally-should is not hedonistically-should, is not practically-should, etc.
If the correct theory of morality is that morally-should is the same as practically-should, then “whether or not you morally-should take me into account does not depend on your values” is false.
Saying it’s true-for-me-but-not-for-you conflates two very different things: truth being agent-relative and descriptive statements about agents being true or false depending on the agent they’re referring to. “X is 6 feet tall” is true when X is someone who’s 6 feet tall and false when X is someone who’s 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you. Encountering “X is the right thing to do if you’re Person A and the wrong thing to do if you’re Person B” and thinking that morality is subjective is the same sort of mistake as if you encountered the statement “Person A is 6 feet tall and Person B is not 6 feet tall” and concluded that height is subjective.
It may well, but that is a less interesting and less contentious claim. It’s fairly widely accepted that the sum total of ethics is inferable from (supervenes on) the sum total of facts.
Morality is similar—“X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s evaluating you.
Not so! Rather, “X is the right thing for TheAncientGeek to do given TheAncientGeek’s values” is an objectively true (or false) statement. But “X is the right thing for TheAncientGeek to do” tout court is not; it depends on a specific value system being implicitly understood.
“X is the right thing for TheAncientGeek to do” is synonymous with “X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values”. You may not want him to act in accordance with his values, but that doesn’t change the fact that he should—much like in the standard analysis of the prisoner’s dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person’s values alone.
Maximizing one’s values solipsistically is hedonism, not morality.
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans.
Unfortunately, it’s not that easy. An agent, given by itself, doesn’t determine preference. It probably does so to a large extent, but not entirely. There is no subject matter of “preference” in general. “Human preference” is already a specific question that someone has to state, that doesn’t magically appear from a given “human”. A “human” might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you’d want to ask.
I suspect that “Vague statement of human preference”+”human” is enough to get a question of “human preference”, and the method of using the agent’s algorithm is general enough for e.g. “Vague statement of human preference”+”babyeater” to get a precise question of “babyeater preference”, but it’s not a given, and isn’t even expected to “work” for more alien agents, who are compelled by completely different kinds of questions (not that you’d have a way of recognizing such “error”).
The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.
[W]hy should we do what we prefer rather than what they prefer? The correct answer is, of course, “because that’s what we prefer”
That’s not a justification. They may turn out to do something right, where you were mistaken, and you’ll be compelled to correct.
re: denying claim without addressing argument
IMO, such comments are acceptable when the commenter is of high enough status in the community. Obviously I’d prefer they address the argument, but I consider myself better off just knowing that certain people agree or disagree.
ADDED: Note, I am merely stating my personal preference, not insisting that my personal preference become normatively binding on LW. I also happen to agree with Komponisto’s judgment that Unknowns previous comment was unhelpful.
ETA: Note that an implication of what you said is that replying in that manner constitutes an assertion of higher status than the other person; this is exactly why it is irritating.
I think assertions of higher status can sometimes be characterized as justifiable or even desirable. Eliezer does this all the time. The alternative to “stating disagreement while failing to address the details of the argument,” is often to ignore the comment altogether. (Also, see edit to my previous comment before replying further.)
Well, if you agree with me about that particular comment, maybe it would have been preferable to wait for an occasion where you actually disagreed with my judgment to make this point?
(This would help cut down on “fake disagreements”, i.e. disagreements arising out of misunderstanding.)
I think the manner in which komponisto was calling Eliezer a moral relativist deserves a more thorough answer. If I make an off-handed remark and someone disagrees with me, I find an off-handed remark fair. If I spend three paragraphs and get, “No,” as a response I will be annoyed.
Yes—and the important thing to remember is that the second view, which all of us here agree is silly, is the naive, common-sense human view. It’s what people are automatically going to think you’re talking about if you go around shouting “Yes Virginia, there are moral facts after all!”
Meanwhile, the general public has a term for the view that you and I share: they call it “moral relativism”.
I don’t recall exactly, and I haven’t yet bothered to look it up, but I believe when you first introduced your metaethics, there were people (myself among them, I think) who objected, not to your actual meta-ethical views, but to the way that you vigorously denied that you were a “relativist”; and you misunderstood them/us as objecting to your theory itself (I think you maybe even threw in an accusation of not comprehending the logical subtleties of Löb’s Theorem).
What makes the theory relativist is simply the fact that it refers explicitly to particular agents—humans. Thus, it is automatically subject to the “chauvinism” objection with respect to e.g. Babyeaters: we prefer one thing, they prefer another—why should we do what we prefer rather than what they prefer? The correct answer is, of course, “because that’s what we prefer”. But people find that answer unpalatable—and one reason they might is because it would seem to imply that different human cultures should similarly run right over each other if they don’t think they share the same values. Now, we may not like the term “relativism”, but it seems to me that this “chauvinism” objection is one that you (and I) need to take at least somewhat seriously.
No, it’s not. The naive, common-sense human view is that sneaking into Jane’s tent while she’s not there and stealing her water-gourd is “wrong”. People don’t end up talking about transcendent ineffable stuff until they have pursued bad philosophy for a considerable length of time. And the conclusion—that you can make murder right without changing the murder itself but by changing a sort of ineffable stuff that makes the murder wrong—is one that, once the implications are put baldly, squarely disagrees with naive moralism. It is an attempt to rescue a naive misunderstanding of the subject matter of mind and ontology, at the expense of naive morality.
I agree that this constitutes relativism, and deny that I am a relativist.
See above. The correct answer is “Because children shouldn’t die, they should live and be happy and have fun.” Note the lack of any reference to humans—this is the sort of logical fact that humans find compelling, but it is not a logical fact about humans. It is a physical fact that I find that logic compelling, but this physical fact is not, itself, the sort of fact that I find compelling.
This is the part of the problem which I find myself unable to explain well to the LessWrongians who self-identify as moral non-realists. It is, admittedly, more subtle than the point about there not being transcendent ineffable stuff, but still, there is a further point and y’all don’t seem to be getting it...
I have the same feeling, from the other direction.
I feel like I completely understand the error you’re warning against in No License To Be Human; if I’m making a mistake, it’s not that one. I totally get that “right”, as you use it, is a rigid designator; if you changed humans, that wouldn’t change what’s right. Fine. The fact remains, however, that “right” is a highly specific, information-theoretically complex computation. You have to look in a specific, narrow region of computation-space to find it. This is what makes you vulnerable to the chauvinism charge; there are lots of other computations that you didn’t decide to single out and call “right”, and the question is: why not? What makes this one so special? The answer is that you looked at human brains, as they happen to be constituted, and said, “This is a nice thing we’ve got going here; let’s preserve it.”
Yes, of course that doesn’t constitute a general license to look at the brains of whatever species you happen to be a member of to decide what’s “right”; if the Babyeaters or Pebblesorters did this, they’d get the wrong answer. But that doesn’t change the fact that there’s no way to convince Babyeaters or Pebblesorters to be interested in “rightness” rather than babyeating or primality. It is this lack of a totally-neutral, agent-independent persuasion route that is responsible for the fundamentally relative nature of morality.
And yes, of course, it’s a mistake to expect to find any argument that would convince every mind, or an ideal philosopher of perfect emptiness—that’s why moral realism is a mistake!
I promise to take it seriously if you need to refer to Löb’s theorem in your response. I once understood your cartoon guide and could again if need be.
If we concede that when people say “wrong”, they’re referring to the output of a particular function to which we don’t have direct access, doesn’t the problem still arise when we ask how to identify what function that is? In order to pin down what it is that we’re looking for, in order to get any information about it, we have to interview human subjects. Out of all the possible judgment-specifying functions out there, what’s special about this one is precisely the relationship humans have with it.
Agreed that this is important. (ETA: I now think Eliezer is right about this.)
We believe (a) that there is no separable essence of goodness, but also (b) that there are moral facts that people can be wrong about. I think the general public understands “moral relativism” to exclude (b), and I don’t think there’s any short term in common (not philosophical) usage that includes the conjunction of (a) and (b).
Eliezer doesn’t define morality in terms of humans; he defines it (as I understand) in terms of an objective computation that happens to be instantiated by humans. See No License to be Human.
I think that’s uncharitable to the public: surely everyone should admit that people can be mistaken, on occasion, about what they themselves think. A view that holds that nothing that comes out of a person’s mouth can ever be wrong is scarcely worth discussing.
The fact that this computation just so happens to be instantiated by humans and nothing else in the known universe cannot be a coincidence; surely there’s a causal relation between humans’ instantiating the computation and Eliezer’s referring to it.
This is far from uncontroversial in the general population.
Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification. We shouldn’t save babies because-morally it’s the human thing to do but because-morally it’s the right thing to do. What physically causes us to save the babies is a combination of the logical fact that saving babies is the right thing to do, and the physical fact that we are compelled by those sorts of logical facts. What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness. The physical fact that humans are compelled by these sorts of logical facts is not one of the facts which makes saving the baby the right thing to do. If I did assert that this physical fact was involved, I would be a moral relativist and I would say the sorts of other things that moral relativists say, like “If we wanted to eat babies, then that would be the right thing to do.”
The moral relativist who says that doesn’t really disagree with you. The moral relativist considers a different property of algorithms to be the one that determines whether an algorithm is a morality, but this is largely a matter of definition.
For the relativist, an algorithm is a morality when it is a logic that compels an agent (in the limit of reflection, etc.). For you, an algorithm is a morality when it is the logic that in fact compels human agents (in the limit of reflection, etc.). That is why your view is a kind of relativism. You just say “morality” where other relativists would say “the morality that humans in fact have”.
You also seem more optimistic than most relativists that all non-mutant humans implement very nearly the same compulsive logic. But other relativists admit that this is a real possibility, and they wouldn’t take it to mean that they were wrong to be relativists.
If there is an advantage to the relativists’ use of “morality”, it is that their use doesn’t prejudge the question of whether all humans implement the same compulsive logic.
I agree with this comment and feel that it offers strong points against Eliezer’s way of talking about this issue.
Right, so a moral relativist is a kind of moral absolutist who believes that the One True Moral Rule is that you must do what is the collective moral will of the species you’re part of.
Yup, and so long as I’m going to be a moral absolutist anyway, why be that sort of moral absolutist?
I agree that it seems as though I just don’t understand. Sometimes, I feel perched on the edge of understanding, feel a little dizzy, and decide I don’t understand.
I don’t claim to be representative in any way, but my stumbling block seems to be this idea about how saving babies is right. Since I don’t feel strongly that saving babies is “right”, whenever you write, “saving babies is the right thing to do”, I translate this as, “X is the right thing to do” where X is something that is right, whatever that might mean. I leave that as a variable to see if it gets answered later.
Then you write, “What makes saving the baby the right thing to do is a logical fact about the subject matter of rightness—in this case, a pretty fast and primitive implication from the premises that are baked into that subject matter and which distinguish it from the subject matter of wrongness.”
How is wrongness or rightness baked into a subject matter?
Of course it isn’t, because we’re doing meta-ethics here, and don’t yet have access to the notion of “moral justification”; we’re in the process of deciding which kinds of things will be used as “moral justification”.
It’s your metamorality that is human-dependent, not your morality; see my other comment.
Now I’m confused. I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Since we don’t have conscious access to our premises, and we haven’t finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that’s not like a philosopher of pure emptiness constructing justificationness from scratch by appealing to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)
Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you’ll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn’t mean you’re appealing to the conclusion you’re trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn’t necessarily refer explicitly to “humans” as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn’t imply that the AI itself is appealing to “human values” in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
That would be epistemic preferences. It’s epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.
Because Eliezer made the ingenious move of redefining “should” to mean “do what we prefer”.
It is an internally consistent way of using language; it is just somewhat unusual.
I truly and honestly say to you, Roko, that while you got most of my points, maybe even 75% of my points, there seems to be a remaining point that is genuinely completely lost on you. And a number of other people. It is a difficult point. People here are making fun of my attempt to explain it using an analogy to Löb’s Theorem, as if that were the sort of thing I did on a whim, or because of being stupid. But… my dear audience… really, by this point, you ought to be giving me the benefit of the doubt about that sort of thing.
Also, it appears from the comment posted below and earlier that this mysterious missed point is accessible to, for example, Nick Tarleton.
It looks to me like the opposing position is not based on disagreement with this point but rather outright failure to understand what is being said.
Well, you did make a claim about what is the right translation when speaking to babyeaters:
But there has to be some standard by which you prefer the explanation “we mistranslated the term ‘morality’” to “we disagree about morality”, right? What is that? Presumably, one could make your argument about any two languages, not just ones with a species gap:
“We and Spaniards are talking about a different subject matter and it is an error of the computer translation programs that the word comes out as “morality” in both cases. Morality is about how to protect freedoms, not restrict them, everyone knows that and they happen to be right. If we could get past difficulties of the translation, the Spaniards would agree with us about what is moral, we would agree with them about what is familydutyhonoring.”
ETA: A lot of positive response to this, but let me add that I think a better term in the last place would be something like “morality-to-Spaniards”. The intuition behind the original phrasing was to show how you can redefine Spanish standards of morality as not “morality”, but rather just “things that we place different priority on”.
But it’s clearly absurd there: the correct translation of ética is not “ethics-to-Spaniards”, but rather, just plain old “ethics”. And the same reasoning should apply to the babyeater case.
To go a step further, moral disagreement doesn’t require a language barrier at all.
“We and abolitionists are talking about a different subject matter and it is an error of the “computer translation programs” that the word comes out as “morality” in both cases. Morality is about how to create a proper relationship between races, everyone knows that and they happen to be right. If we could get past difficulties of the “translation”, the abolitionists would agree with us about what is moral, we would agree with them about what is abolitionism.”
No, I understand that your long list wouldn’t change if humanity itself changed, that if I altered every human to like eating babies, that wouldn’t make babyeating right, in the Eliezer world.
“the rightness computation that humanity just happens to instantiate” is different from “whatever computation humanity instantiates”
Is there something else I am missing?
Eliezer uses the word “should” in what seems to me to be a weird and highly counter-intuitive way.
Multiple people have advised him about this—but he seems to like his usage.
He doesn’t define it indexically (like this) or in terms of humans; as I understand it, he defines it in terms of an objective computation that happens to be instantiated by humans (No License to be Human).
As I understand it, relativism doesn’t mean “refers explicitly to particular agents”. Suppose there’s a morality-determining function that takes an agent’s terminal values and their psychology/physiology and spits out what that agent should do. It would spit different things out for different agents, and even more different things for different kinds of agents (humans vs babyeaters). Nevertheless, this would not quite be moral relativism because it would still be the case that there’s an objective morality-determining function that is to be applied to determine what one should do. Moral relativism would not merely say that there’s no one right way one should act, it would also say that there’s no one right way to determine how one should act.
It’s not objective, because its results differ with differing terminal values. An objective morality machine would tell you what you should do, not tell you how to satisfy your values. In other words, morality isn’t decision theory.
Why must the two be mutually exclusive? Why can’t morality be about satisfying your values? One could say that morality properly understood is nothing more than the output of decision theory, or that outputs of decision theory that fall in a certain area labeled “moral questions” are morality.
Because that isn’t how the term “morality” is typically used by humans. The “morality police” found in certain Islamic countries aren’t life coaches. The Ten Commandments aren’t conditional statements. When people complain about the decaying moral fabric of society, they’re not talking about a decline in introspective ability.
Inherent to the concept of morality is the external imposition of values. (Not just decisions, because they also want you to obey the rules when they’re not looking, you see?) Sociologically speaking, morality is a system for getting people to do unfun things by threatening ostracization.
Decision theory (and meta-decision-theory etc.) does not exist to analyze this concept (which is not designed for agents); it exists to replace it.
Morality done right is about the voluntary and mutual adjustment of values (or rather, the actions expressing them).
Morality done wrong can go two ways. One failure mode is hedonism, where the individual takes no notice of the preferences of others; the other is authoritarianism, where “society” (rather, its representatives) imposes values that no one likes or has a say in.
Morality is about all of these things, and more besides. Although “outer” morality as embodied in moral codes and moral exemplars is definitely important, if there were no inner values for humans to care about in the first place, no one would be going around and imposing them on others, or even debating them in any way.
And it is a fact about the world that most basic moral values are shared among human societies. Morality may or may not be objective, but it is definitely intersubjective in a way that looks ‘objective’ to the casual observer.
“Morality” is used by humans in unclear ways and I don’t know how much can be gained from looking at common usage. It’s more sensible to look at philosophical ethical theories rather than folk morality—and there you’ll find that moral internalism and ethical egoism are within the realm of possible moralities.
Note the word objective.
An objective morality machine would tell you the One True Objective Thing TheAncientGeek Should Do, given your values, but this thing need not be the same as The One True Objective Thing Blacktrance Should Do. The calculations it performs are the same in both cases (which is what makes it objective), but the outputs are different.
You are misusing “objective”. How does your usage differ from telling me what I should do subjectively? How can true-for-me-but-not-for-you clauses fail to indicate subjectivity? How can it be coherent to say there is one truth, only it is different for everybody?
A person’s height is objectively measurable; that does not mean all people have the same height.
“True about person P” is objective.
“True for person P about X” is subjective.
Subjectivity is multiple truths about one thing, i.e. multiple claims about one thing, which are indexed to individuals and which would be contradictory without the indexing.
In this discussion, I understand there to be three positions:
1. There is one objectively measurable value system.
2. There is an objectively measurable value system for each agent.
3. There are no objectively measurable value systems.
The ‘objective’ and ‘subjective’ distinction is not particularly useful for this discussion, because it confuses the separation between ‘measurable’ and ‘unmeasurable’ (1+2 vs. 3) and ‘universal’ and ‘particular’ (1 vs. 2+3).
But even ‘universal’ and ‘particular’ are not quite the right words: Clippy’s particular preference for paperclips is one that Clippy would like to enforce on the entire universe.
No one holds 3. 1 is ambiguous; it depends on whether we’re speaking “in character” or not. If we are, then it follows from 2 (“there is one objectively measurable value system, namely mine”).
The trouble with Eliezer’s “metaethics” sequence is that it’s written in character (as a human), and something called “metaethics” shouldn’t be.
It is not obvious to me that this is the case.
[edit to expand]: I think that when a cognitivist claims “I’m not a relativist,” they need to have a position like 3 in mind to identify as relativism. Perhaps it was an overreach to use ‘value system’ instead of ‘morality’ in the description of 3; that choice was driven more by my allergy to the word ‘morality’ than by a desire to be correct or communicative.
One could be certain that God’s morality is correct, but be uncertain what God’s morality is.
I agree with this assessment.
Yes. He has strong intuitions that his own moral intuitions are really true, combined with strong intuitions that morality is this very localized, human thing that doesn’t exist elsewhere. So he defines morality as what humans think morality is… what I don’t know isn’t knowledge.
People always write in character. If you try to use some different definition of “morality” than normal for talking about metaethics, you’ll reach the wrong conclusions because, y’know, you’re quite literally not talking about morality any more.
Language is different from metalanguage, even if both are (in) English.
You shouldn’t be using any definition of “morality” when talking about metaethics, because on that level the definition of “morality” isn’t fixed; that’s what makes it meta.
My complaint about the sequence is that it should have been about the orthogonality thesis, but instead ended up being about rigid designation.
You should use a definition, but one that doesn’t beg the question.
I can’t make sense of that. Isn’t the whole point of metaethics to create an account of what this morality stuff is (if it’s anything at all) and how the word “morality” manages to refer to it? If metaethics wasn’t about morality it wouldn’t be called metaethics, it would be called, I dunno, “decision theory” or something.
And if it is about morality, it’s unclear how you’re supposed to refer to the subject matter (morality) without saying “morality”. Or the other subject matter (the word “morality”) to which you fail to refer if you start talking about a made-up word that’s also spelled “m o r a l i t y” but isn’t the word people actually use.
I remember it as being about both. (exhibit 1, exhibit 2. The latter was written before EY had heard of rigid designators, though. It could probably be improved these days.)
Agreed. What I should do is a separate thing from what you should do, even though they’re the same type of thing and may be similar in many ways.
What you morally should do to me has to take me into account, and vice versa. Otherwise you are using morality to mean hedonism.
In one sense, this is trivial. I have to take you into account when I do something to you, just like I have to take rocks into account when I do something to them. You’re part of a state of the world. (It may be the case that after taking rocks into account, it doesn’t affect my decision in any way. But my decision can still be formulated as taking rocks into account.)
In another sense, whether I should take your well-being into account depends on my values. If I’m Clippy, then I shouldn’t. If I’m me, then I should.
Hedonism makes action-guiding claims about what you should do, so it’s a form of morality, but it doesn’t by itself mean that I shouldn’t take you into account—it only means that I should take your well-being into account instrumentally, to the degree it gives me pleasure. Also, the fulfillment of one’s values is not synonymous with hedonism. A being incapable of experiencing pleasure, such as some form of Clippy, has values but acting to fulfill them would not be hedonism.
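In case it helps, here is a minimal sketch of that distinction in Python (the function names and toy numbers are mine, purely illustrative, and not anything from the thread): both “machines” are fed the same states of the world, other people’s welfare included, and only the value function determines whether that part of the input makes any difference to the output.

```python
# Purely illustrative sketch: others' preferences are part of the world-state
# that any agent's decision procedure consumes, but whether they affect the
# chosen action depends on the agent's value function.

def choose(value_of_outcome, outcomes_by_action):
    """Return the action whose predicted outcome the agent values most."""
    return max(outcomes_by_action,
               key=lambda action: value_of_outcome(outcomes_by_action[action]))

# Both agents are fed the very same states of the world, including facts
# about other people's welfare.
outcomes = {
    "help":   {"paperclips": 0, "my_welfare": 1, "others_welfare": 5},
    "ignore": {"paperclips": 2, "my_welfare": 2, "others_welfare": 0},
}

clippy_value   = lambda o: o["paperclips"]                        # others' welfare is in the input but changes nothing
humanish_value = lambda o: o["my_welfare"] + o["others_welfare"]  # others' welfare counts terminally

print(choose(clippy_value, outcomes))    # 'ignore'
print(choose(humanish_value, outcomes))  # 'help'
```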
Whether or not you morally-should take me into account does not depend on your values; it depends on what the correct theory of morality is. “Should” is not an unambiguous term with a free variable for “to whom”. It is an ambiguous term: morally-should is not hedonistically-should, which is not practically-should, etc.
Unless the correct theory of morality is that morally-should is the same thing as practically-should, in which case it would depend on your values.
A sentence beginning “unless the correct theory is...” does not refute a sentence including “depends on what the correct theory is...”.
If the correct theory of morality is that morally-should is the same as practically-should, then “whether or not you morally-should take me into account does not depend on your values” is false.
Whether or not morality depends on your values depends on what the correct theory of morality is.
Saying it’s true-for-me-but-not-for-you conflates two very different things: truth being agent-relative, and descriptive statements about agents being true or false depending on the agent they’re referring to. “X is 6 feet tall” is true when X is someone who’s 6 feet tall and false when X is someone who’s 4 feet tall, and in neither case is it subjective, even though the truth-value depends on who X is. Morality is similar: “X is the right thing for TheAncientGeek to do” is an objectively true (or false) statement, regardless of who’s doing the evaluating. Encountering “X is the right thing to do if you’re Person A and the wrong thing to do if you’re Person B” and concluding that morality is subjective is the same sort of mistake as encountering the statement “Person A is 6 feet tall and Person B is not 6 feet tall” and concluding that height is subjective.
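A toy sketch of that logical point (the names and heights are invented for illustration): the claim is indexed to the person it is about, but its truth-value does not depend on who evaluates it.

```python
# Toy sketch (names and heights invented): an agent-indexed claim is "about"
# a particular person, but its truth-value does not depend on who asks.

HEIGHT_IN_FEET = {"Person A": 6.0, "Person B": 4.0}

def is_six_feet_tall(person):
    # No 'evaluator' parameter anywhere: the answer is fixed by the facts
    # about `person`, not by whoever happens to be doing the evaluating.
    return HEIGHT_IN_FEET[person] == 6.0

assert is_six_feet_tall("Person A")      # true, and it is about Person A
assert not is_six_feet_tall("Person B")  # false, and it is about Person B
```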
See my other reply.
Indexing statements about individuals to individuals is harmless. Subjectivity comes in when you index statements about something else to individuals.
Morally relevant actions are actions which potentially affect others.
Your morality machine is subjective because I don’t need to feed in anyone else’s preferences, even though my actions will affect them.
Other people’s preferences are part of states of the world, and states of the world are fed into the machine.
Not part of the original spec!!!
Fair enough. In that case, the machine would tell you something like “Find out expected states of the world. If it’s A, do X. If it’s B, do Y”.
It may well, but that is a less interesting and less contentious claim. It’s fairly widely accepted that the sum total of ethics is inferable from (supervenes on) the sum total of facts.
Not so! Rather, “X is the right thing for TheAncientGeek to do given TheAncientGeek’s values” is an objectively true (or false) statement. But “X is the right thing for TheAncientGeek to do” tout court is not; it depends on a specific value system being implicitly understood.
“X is the right thing for TheAncientGeek to do” is synonymous with “X is the right thing for TheAncientGeek to do according to his (reflectively consistent) values”. You may not want him to act in accordance with his values, but that doesn’t change the fact that he should—much like in the standard analysis of the prisoner’s dilemma, each prisoner wants the other to cooperate, but has to admit that each of them should defect.
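For reference, a small sketch of the standard prisoner’s dilemma point being invoked here (the payoff numbers are the conventional textbook ones, not taken from the source): defection is each player’s best reply whatever the other does, even though each would prefer the other to cooperate.

```python
# Standard one-shot prisoner's dilemma with conventional textbook payoffs
# (higher is better). payoffs[my_move][their_move] gives *my* payoff.
payoffs = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(their_move):
    return max(payoffs, key=lambda my_move: payoffs[my_move][their_move])

# Defecting is the better reply whatever the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet each player does better when the *other* cooperates:
assert payoffs["defect"]["cooperate"] > payoffs["defect"]["defect"]
```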
Same mistake. Only actions that affect others are morally relevant, from which it follows that rightness cannot be evaluated from one person’s values alone.
Maximizing one’s values solipsistically is hedonism, not morality.
Notice I didn’t use the term “morality” in the grandparent. Cf. my other comment.
But the umpteenth grandparent was explicitly about morality.
Unfortunately, it’s not that easy. An agent, given by itself, doesn’t determine preference. It probably does so to a large extent, but not entirely. There is no subject matter of “preference” in general. “Human preference” is already a specific question that someone has to state, that doesn’t magically appear from a given “human”. A “human” might only help (I hope) to pinpoint the question precisely, if you start in the general ballpark of what you’d want to ask.
I suspect that “Vague statement of human preference”+”human” is enough to get a question of “human preference”, and the method of using the agent’s algorithm is general enough for e.g. “Vague statement of human preference”+”babyeater” to get a precise question of “babyeater preference”, but it’s not a given, and isn’t even expected to “work” for more alien agents, who are compelled by completely different kinds of questions (not that you’d have a way of recognizing such “error”).
The reference to humans or babyeaters is in the method of constructing a preference-implementing machine, not in the concept itself. What humans are is not the info that compels you to define human preference in a particular way, although what humans are may be used as a tool in the definition of human preference, simply because you can pull the right levers and point to the chunks of info that go into the definition you choose.
That’s not a justification. They may turn out to do something right, where you were mistaken, and you’ll be compelled to correct.
Yes.
As it is commonly understood, Eliezer is definitely NOT a moral relativist.
(Downvoted for denying my claim without addressing my argument. That’s very annoying.)
Re: denying a claim without addressing the argument: IMO, such comments are acceptable when the commenter is of high enough status in the community. Obviously I’d prefer they address the argument, but I consider myself better off just knowing that certain people agree or disagree.
ADDED: Note, I am merely stating my personal preference, not insisting that my personal preference become normatively binding on LW. I also happen to agree with Komponisto’s judgment that Unknowns’ previous comment was unhelpful.
I disagree.
ETA: Note that an implication of what you said is that replying in that manner constitutes an assertion of higher status than the other person; this is exactly why it is irritating.
I think assertions of higher status can sometimes be characterized as justifiable or even desirable. Eliezer does this all the time. The alternative to “stating disagreement while failing to address the details of the argument,” is often to ignore the comment altogether. (Also, see edit to my previous comment before replying further.)
Well, if you agree with me about that particular comment, maybe it would have been preferable to wait for an occasion where you actually disagreed with my judgment to make this point?
(This would help cut down on “fake disagreements”, i.e. disagreements arising out of misunderstanding.)
Agreed.
I think the manner in which komponisto was calling Eliezer a moral relativist deserves a more thorough answer. If I make an off-handed remark and someone disagrees with me, I find an off-handed reply fair. If I spend three paragraphs and get “No” as a response, I will be annoyed.
In this case, I side with komponisto.
Not individual-level relativism, or not group-level relativism?
As I understand the common usage, “moral relativist” means someone who doesn’t believe in absolute morality, which I think describes pretty much all of us.