From the comment by Richard Chappell:
(i) ‘Right’ means, roughly, ‘promotes external goods X, Y and Z’
(ii) claim (i) above is true because I desire X, Y, and Z (namely, whatever terminal values the speaker happens to hold, on some appropriate [if somewhat mysterious] idealization).
People really think EY is saying this? It looks to me like a basic Egoist stance, where “your values” also include your moral preferences. That is my position, but I don’t think EY is on board.
“Shut up and multiply” implies a symmetry in value between different people that isn’t implied by the above. Similarly, the diversion into mathematical idealization seemed like a maneuver toward Objective Morality—One Algorithm to Bind Them, One Algorithm to Rule them All. Everyone gets their own algorithm as the standard of right and wrong? Fantastic, if it were true, but that’s not how I read EY.
It’s strange, because Richard seems to say that EY agrees with me, while I think EY agrees with him.
I think you are mixing up object-level ethics and metaethics here. You seem to be contrasting an Egoist position (“everyone should do what they want”) with an impersonal utilitarian one (“everyone should do what is good for everyone, shutting up and multiplying”). But the dispute is about what “should”, “right” and related words mean, not about what should be done.
Eliezer (in Richard’s interpretation) says that when someone says “Action A is right” (or “should be done”), the meaning of this is roughly “A promotes ultimate goals XYZ”. Here XYZ is in fact the outcome of a complicated computation based on the speaker’s state of mind, which can be translated roughly as “the speaker’s terminal values” (for example, for a sincere philanthropist XYZ might be “everyone gets joy, happiness, freedom, etc”). But the fact that XYZ are the speaker’s terminal values is not part of the meaning of “right”, so it is not inconsistent for someone to say “Everyone should promote XYZ, even if they don’t want it” (e.g. “Babyeaters should not eat babies”). And needless to say, XYZ might include generalized utilitarian values like “everyone gets their preferences satisfied”, in which case impersonal, shut-up-and-multiply utilitarianism is what is needed to make actual decisions for concrete cases.
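To make that reading concrete, here is a rough Python sketch of it (the function names and toy data are my own invention, not anything Eliezer or Richard actually wrote): the speaker’s terminal values are extracted once, and from then on “right” only asks whether an action promotes those fixed goals, without referring back to whoever is being evaluated.

```python
# A toy model of the reading above. The speaker's terminal values are
# extracted once ("rigidified") into a fixed set of goals XYZ; after that,
# "right" only tests whether an action promotes those fixed goals. The
# predicate never refers back to the speaker, which is why "Babyeaters
# should not eat babies" is not inconsistent on this reading.

def extract_terminal_values(state_of_mind):
    # Stand-in for the "complicated computation" over the speaker's
    # state of mind; here it just reads off a stored list.
    return state_of_mind["terminal_values"]

def make_right_predicate(speaker_state_of_mind):
    xyz = extract_terminal_values(speaker_state_of_mind)  # computed once
    def right(action):
        # The meaning of "right" is exhausted by this test; the fact that
        # xyz came from the speaker is not part of that meaning.
        return all(goal in action["promotes"] for goal in xyz)
    return right

philanthropist = {"terminal_values": ["joy", "happiness", "freedom"]}
right = make_right_predicate(philanthropist)

helping = {"promotes": ["joy", "happiness", "freedom"]}
baby_eating = {"promotes": ["babyeater_tradition"]}

print(right(helping))      # True
print(right(baby_eating))  # False: "Babyeaters should not eat babies"
```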
But the dispute is about what “should”, “right” and related words mean, not about what should be done.
Of course it’s about both. You can define labels in any way you like. In the end, your definition better be useful for communicating concepts with other people, or it’s not a good definition.
Let’s define “yummy”. I put food in my mouth. Taste buds fire, neural impulses propagate from neuron to neuron, and eventually my mind evaluates how yummy it is. Similar events happen for you. Your taste buds fire, your neural impulses propagate, and your mind evaluates how yummy it is. Your taste buds are not mine, and your neural networks are not mine, so your response and my response are not identical. If I make a definition of “yummy” that entails that what you find yummy is not in fact yummy, I’ve created a definition that is useless for dealing with the reality of what you find yummy.
From my inside view of yummy, of course you’re just wrong if you think root beer isn’t yummy: I taste root beer, and it is yummy. But being a conceptual creature, I have more than the inside view; I have an outside view as well, of you, and him, and her, and ultimately of me too. So when I talk about yummy with other people, I recognize that their inside view is not identical to mine, and so I use a definition based on the outside view, so that we can actually be talking about the same thing, instead of throwing our differing inside views at each other.
Discussion with the inside view: “Let’s get root beer.” “What? Root beer sucks!” “Root beer is yummy!” “Is not!” “Is too!”
Discussion with the outside view: “Let’s get root beer.” “What? Root beer sucks!” “You don’t find root beer yummy?” “No. Blech.” “OK, I’m getting a root beer.” “And I pick Pepsi.”
If you’ve tied yourself up in conceptual knots, and concluded that root beer really isn’t yummy for me, even though my yummy detector fires whenever I have root beer, you’re just confused and not talking about reality.
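The inside view / outside view point is really just the difference between a one-place and a two-place word. A minimal sketch, with made-up lookup tables standing in for actual taste buds:

```python
# Inside view: "yummy" behaves like a one-place word, because my own taste
# response is baked into it. Two people each running their own copy of this
# end up in "Is not!" / "Is too!" territory.
MY_TASTES = {"root beer": True, "pepsi": False}

def yummy_inside(food):
    return MY_TASTES.get(food, False)

# Outside view: make the taster an explicit argument. Now my report and
# yours are claims about different facts, and both can be true at once.
TASTES = {
    "me":  {"root beer": True,  "pepsi": False},
    "you": {"root beer": False, "pepsi": True},
}

def yummy(food, taster):
    return TASTES[taster].get(food, False)

print(yummy_inside("root beer"))   # True (but this only reports on my detector)
print(yummy("root beer", "me"))    # True
print(yummy("root beer", "you"))   # False; no contradiction anywhere
```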
But the fact that XYZ are the speaker’s terminal values is not part of the meaning of “right”
This is the problem. You’ve divorced your definition from the relevant part of reality (the speaker’s terminal values) and somehow twisted it around so that what he “should” do is at odds with his terminal values. This definition is not useful for discussing moral issues with the given speaker. He’s a machine that maximizes his terminal values. If his algorithms are functioning properly, he’ll disregard your definition as irrelevant to achieving his ends. Whether from the inside view of morality for that speaker, or his outside view, you’re just wrong. And you’re also wrong from any outside view that accurately models what terminal values people actually have.
Rational discussions of morality start with the observation that people have differing terminal values. Our terminal values are our ultimate biases. Recognizing that my biases are mine, and not identical to yours, is the first step away from the usual useless babble in moral philosophy.
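One way to see why a definition divorced from the speaker’s terminal values simply gets ignored: a toy value-maximizing chooser (a sketch under my own simplifying assumptions, not a model of real people) only consults its own values when ranking actions, so a “right” label keyed to someone else’s values never enters the computation.

```python
# A toy value-maximizing chooser. It scores actions only against its own
# terminal values; an external "right" predicate built on different values
# never enters the computation, so it gets disregarded automatically.

def choose(actions, my_values):
    def score(action):
        return sum(my_values.get(outcome, 0) for outcome in action["outcomes"])
    return max(actions, key=score)

speaker_values = {"family_fed": 10, "stranger_approval": 1}

actions = [
    {"name": "feed_family",    "outcomes": ["family_fed"]},
    {"name": "impress_critic", "outcomes": ["stranger_approval"]},
]

# Someone else's definition of "right", keyed to values the speaker
# does not hold:
def right_by_your_definition(action):
    return "stranger_approval" in action["outcomes"]

best = choose(actions, speaker_values)
print(best["name"])                    # feed_family
print(right_by_your_definition(best))  # False, and the chooser never asked
```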