Inseparably Right; or, Joy in the Merely Good
Followup to: The Meaning of Right
I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics. Here then is an attempted rephrase:
There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.
What do you value? At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways. You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you’ve acted on this supposed preference. If you’re reading this blog you probably attach some value to truth for the sake of truth. If you’ve ever learned to play a musical instrument, or paint a picture, or if you’ve ever solved a math problem for the fun of it, then you probably attach real value to good art. You value your freedom, the control that you possess over your own life; and if you’ve ever really helped someone you probably enjoyed it. You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe. You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.
And you value many more things than these.
Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call “good”.
There’s no way you can detach the little label from what it stands for, and still make ontological or moral sense.
Why might the little ‘good’ label seem detachable? A number of reasons.
Mainly, that’s just how your mind is structured—the labels it attaches internally seem like extra, floating, ontological properties.
And there’s no one value that determines whether a complicated event is good or not—and no five values, either. No matter what rule you try to describe, there’s always something left over, some counterexample. Since no single value defines goodness, this can make it seem like all of them together couldn’t define goodness. But when you add them up all together, there is nothing else left.
If there’s no detachable property of goodness, what does this mean?
It means that the question, “Okay, but what makes happiness or self-determination, good?” is either very quickly answered, or else malformed.
The concept of a “utility function” or “optimization criterion” is detachable when talking about optimization processes. Natural selection, for example, optimizes for inclusive genetic fitness. But there are possible minds that implement any utility function, so you don’t get any advice there about what you should do. You can’t ask about utility apart from any utility function.
When you ask “But which utility function should I use?” the word should is something inseparable from the dynamic that labels a choice “should”—inseparable from the reasons like “Because I can save more lives that way.”
Every time you say should, it includes an implicit criterion of choice; there is no should-ness that can be abstracted away from any criterion.
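To put the same point in code (a minimal sketch; the toy actions, outcomes, and utility functions below are invented purely for illustration): there is no “best action” simpliciter, only a best action relative to whatever criterion you pass in.

```python
# Illustrative sketch only: the actions, outcomes, and utility functions
# here are made up for the example. The point is that "best" is always
# computed relative to some utility function; there is no zero-argument
# version of best_action.

from typing import Callable, Dict

Outcome = Dict[str, int]  # a toy world-state: counts of things in the world

def paperclip_utility(outcome: Outcome) -> float:
    return outcome.get("paperclips", 0)

def humane_utility(outcome: Outcome) -> float:
    return 10 * outcome.get("lives_saved", 0) + outcome.get("art", 0)

ACTIONS: Dict[str, Outcome] = {
    "build_paperclip_factory": {"paperclips": 1000, "lives_saved": 0, "art": 0},
    "pull_child_off_tracks":   {"paperclips": 0,    "lives_saved": 1, "art": 0},
}

def best_action(utility: Callable[[Outcome], float]) -> str:
    # "Best" is only defined relative to the utility function passed in.
    return max(ACTIONS, key=lambda action: utility(ACTIONS[action]))

print(best_action(paperclip_utility))  # -> build_paperclip_factory
print(best_action(humane_utility))     # -> pull_child_off_tracks
```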
There is no separable right-ness that you could abstract from pulling a child off the train tracks, and attach to some other act.
Your values can change in response to arguments; you have metamorals as well as morals. So it probably does make sense to think of an idealized good, or idealized right, that you would assign if you could think of all possible arguments. Arguments may even convince you to change your criteria of what counts as a persuasive argument. Even so, when you consider the total trajectory arising out of that entire framework, that moral frame of reference, there is no separable property of justification-ness, apart from any particular criterion of justification; no final answer apart from a starting question.
I sometimes say that morality is “created already in motion”.
There is no perfect argument that persuades the ideal philosopher of perfect emptiness to attach a perfectly abstract label of ‘good’. The notion of the perfectly abstract label is incoherent, which is why people chase it round and round in circles. What would distinguish a perfectly empty label of ‘good’ from a perfectly empty label of ‘bad’? How would you tell which was which?
But since every supposed criterion of goodness that we describe turns out to be wrong, or incomplete, or changes the next time we hear a moral argument, it’s easy to see why someone might think that ‘goodness’ was a thing apart from any criterion at all.
Humans have a cognitive architecture that easily misleads us into conceiving of goodness as something that can be detached from any criterion.
This conception turns out to be incoherent. Very sad. I too was hoping for a perfectly abstract argument; it appealed to my universalizing instinct. But...
But the question then becomes: is that little fillip of human psychology more important than everything else? Is it more important than the happiness of your family, your friends, your mate, your extended tribe, and yourself? If your universalizing instinct is frustrated, is that worth abandoning life? If you represented rightness wrongly, do pictures stop being beautiful and does math stop being elegant? Is that one tiny mistake worth forsaking the gift we could give to tomorrow? Is it even really worth all that much in the way of existential angst?
Or will you just say “Oops” and go back to life, to truth, fun, art, freedom, challenge, humor, moral arguments, and all those other things that in their sum and in their reflective trajectory, are the entire and only meaning of the word ‘right’?
Here is the strange habit of thought I mean to convey: Don’t look to some surprising unusual twist of logic for your justification. Look to the living child, successfully dragged off the train tracks. There you will find your justification. What ever should be more important than that?
I could dress that up in computational metaethics and FAI theory—which indeed is whence the notion first came to me—but when I translated it all back into human-talk, that is what it turned out to say.
If we cannot take joy in things that are merely good, our lives shall be empty indeed.
Part of The Metaethics Sequence
Next post: “Sorting Pebbles Into Correct Heaps”
Previous post: “Morality as Fixed Computation”
Eliezer, thank you for this clear explanation. I’m just now making the connection to your calculator example, which struck me as relevant if I could only figure out how. Now it’s all fitting together.
How does this differ from personal preference? Or is it simply broader in scope? That is, if an individual’s calculation includes “self-interest” and weighs it heavily, personal preference might be the result of the calculation, which fits inside your metamoral model, if I’m reading things correctly.
Most goods don’t depend justificationally on your state of mind, even though that very judgment is implemented computationally by your state of mind. A personal preference depends justificationally on your state of mind.
“If we cannot take joy in things that are merely good, our lives shall be empty indeed.”
I suppose the ultimate in emptiness is non-existence. What’s your opinion on anti-natalism?
Eliezer, you write, “Most goods don’t depend justificationally on your state of mind, even though that very judgment is implemented computationally by your state of mind. A personal preference depends justificationally on your state of mind.”
Could you elaborate on this distinction? (IIRC, most of what you’ve written explicitly on the difference between preference and morality was in your dialogues, and you’ve warned against attributing any views in those dialogues to you.)
In particular, in what sense do “personal preferences depend justificationally on your state of mind”? If I want to convince someone to prefer rocky road ice cream over almond praline, I would most likely proceed by telling them about the ingredients in rocky road that I believe that they like more than the ingredients in almond praline. Suppose that I know that you prefer walnuts over almonds. Then my argument would include lines like “rocky road contains walnuts, and almond praline contains almonds.” These would not be followed by something like ”… and you prefer walnuts over almonds.” Yes, I wouldn’t have offered the comparison if I didn’t believe that that was the case, but, so far as the structure of the argument is concerned, such references to your preferences would be superfluous. Rather, as you’ve explained with morality, I would be attempting to convince you that rocky road has certain properties. These properties are indeed the ones that I think will make the system of preferences within you prefer rocky road over almond praline. And, as with morality, that system of preferences is a determinate computational property of your mind as it is at the moment. But, just as in your account of moral justification as I understand it, I don’t need to refer to that computational property to make my case. I will just try to convince you that the facts are such that certain things are to be found in rocky road. These are things that happen to be preferred by your preference system, but I won’t bother to try to convince you of that part.
Actually, the more I think about this ice cream example, the more I wonder whether you wouldn’t consider it to be an example of moral justification. So, I’m curious to know an example of what you would consider to be a personal preference but not a moral preference.
“I too was hoping for a perfectly abstract argument; it appealed to my universalizing instinct. But...”
Not to mention your FAI-coding instincts, huh?
Good summarizing post.
Good post, Eliezer. Now that I’ve read it (and the previous one), I can clearly see (I think) why you think CEV is a good idea, and how you arrived at it. And now I’m not as skeptical about it as I was before.
Ben, my FAI-coding instincts at the time were pretty lousy. The concept does not appeal to my modern instinct; and calling the instinct I had back then an “FAI-coding” one is praising it too highly.
Tyrrell, the distinction to which I refer is the role that “Because I like walnuts over almonds” plays in my justification for choosing rocky road, and presumably your motive for convincing me thereof if you’re an altruist. We can see the presence of this implicit justification, whether or not it is mentioned, by asking the following moral question: “If-counterfactual I came to personally prefer almonds over walnuts, would it be right for me to choose praline over rocky road?” The answer, “Yes”, reveals that there is an explicit, quoted, justificational dependency, in the moral computation, on my state of mind and preference.
This is not to be confused with a physical causal dependency of my output on my brain, which always exists, even for the calculator that asks only “What is 2 + 3?” The calculator’s output depends on its transistors, but it has not asked a personal-preference-dependent question.
Beautiful, and very true indeed. Nothing new, but your way of expression is so elegant! Your mind is clear and genuine, this fills me with joy and hope!
I think everything you say in this post is correct. But there’s nothing like a universal agreement as to what is “good”, and although our ideas as to what is good will change over time, I see no reason to believe that they will converge.
@Eliezer:
The problem that arises with this point of view is that you have not defined one rightness, you have defined approximately 6 billion rightnesses, one for each person on the planet, and they are all different. Some—perhaps most of them—are not views that the readers of this blog would identify with.
The question of whose rightness gets to go into the AI still arises, and I don’t think that the solution you have outlined is really up to the task of producing a notion of rightness that everyone on the planet agrees with. Not that I blame you: it’s an impossible task!
I concede that the ethical system for a superintelligent seed AI is not the place to try out new moral theories. The ideal situation would be one where the change of substrate—of intelligence moving from flesh to silicon—is done without any change of ethical outlook, so as to minimize the risk of something uncalled for happening.
I would endorse a more limited effort which focused on recreating the most commonly accepted values of our society: namely, rational western values. I would also want to work on capturing the values of our society as a narrow AI problem before switching on a putative seed AGI. Such an effort might involve extensive data mining, testing, and calibration in the real world. This would come closer to the ideal of minimizing how much the mind changes whilst the substrate changes. Attempting to synthesize and extrapolate the widely differing values of every human on the planet is something that has never been attempted before, and it is a bad idea to try anything new and risky at the same time as switching on a seed AI.
I think that there is a lot to be said about realist and objective ethics: the application of such work is not to seed AI, though. It is to the other possible routes to superintelligence and advanced technology, which will likely happen under the guidance of human society at large. Technology policy decisions require an ethical and value outlook, so it is worth thinking about how to simplify and unify human values. This doesn’t actually contradict what you’ve said: you talk about the
“total trajectory arising out of that entire framework”
and for me, as for many philosophically minded people, attempting to unify and simplify our value framework is part of the trajectory.
I think that ethical guidance for technology policy decisions is probably marginally more urgent than ethical guidance for seed AIs—merely because there is very little chance of anyone writing a recursively self-improving seed AI in the next 10 years. In the future this will probably change. I still think that designing ethical systems for seed AIs is an extremely important task.
That’s not what CEV is for. It’s for not taking over the world, or if you prefer, not being a jerk, to the maximum extent possible. The maximum extent impossible is not really on the table.
Then you have very little perspective on your place in history, my dear savage barbarian child.
That ain’t a narrow AI problem and you ain’t doin’ it with no narrow AI.
My metaethics is real and objective, just not universal. Fixed computations are objective, and at least halfway real.
It seems to me human life has value insofar as dead people can’t be happy, discover truth, and so on; but not beyond that.
Also I’d like to second TGGP’s question.
My position on natalism is as follows: If you can’t create a child from scratch, you’re not old enough to have a baby.
This rule may be modified under extreme and unusual circumstances, such as the need to carry on the species in the pre-Singularity era, but I see no reason to violate it under normal conditions.
Do you still hold this position?
Presumably anti-natalists would deny the need to carry on the species because they expect the negative value of future suffering to outweigh the positive value of future happiness, truth, etc.
@ Eliezer: “My metaethics is real and objective, just not universal. Fixed computations are objective, and at least halfway real.”
See Wikipedia: “According to the ethical objectivist, the truth or falsity of typical moral judgments does not depend upon the beliefs or feelings of any person or group of persons. This view holds that moral propositions are analogous to propositions about chemistry, biology, or history: they describe (or fail to describe) a mind-independent reality. When they describe it accurately, they are true—no matter what anyone believes, hopes, wishes, or feels.”
Yes, Roko, and the answer to the question “Was the child successfully dragged off the train tracks?” does not depend on the belief or feelings of any person or group of persons; if the child is off the train tracks, that is true no matter what anyone believes, hopes, wishes, or feels. As this is what I identify with the meaning of the term, ‘good’...
@Eliezer: “As this is what I identify with the meaning of the term, ‘good’...”
I’m still a little cloudy about one thing though, Eliezer, and this seems to be the point Roko is making as well. Once you have determined what physically has happened in a situation, and what has caused it, how do you inarguably decide that it is “good” or “bad”? Based on what system of preferring one physical state over another?
Obviously, saving a child from death is good, but how do you decide in trickier situations where intuition can’t do the work for you, and where people just can’t agree on anything, like, say, abortion?
All you’ve done is write down your own beliefs and feelings (that it is a good thing that the child was pulled off the train tracks), reify them, and then claim objectivity. But clearly, if you had had a different belief in the first place, you would have reified a different question/notion of morality. Yes, it is an objective fact that that is what you think is moral, but I feel that this is unhelpful.
And, of course, this lack of objectivity leads to problems, because different people will have their own notions of goodness. My notion of goodness may be slightly different to yours—how can we have a sensible conversation where you insist on using the word “morality” to refer to morality_Eliezer2008? (Or worse still, where you use “moral” to mean “the morality that CEV outputs”)
I think the child on train tracks/orphan in burning building tropes you reference prey on bias, rather than seek to overcome it. And I think you’ve been running from hard questions rather than dealing with them forthrightly (like whether we should give primacy to minimizing horrific outcomes or to promoting social aesthetics like “do not murder children”). To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future outcomes commons, with all the thinkers vying for status over maximizing our future outcomes.
“My notion of goodness may be slightly different to yours—how can we have a sensible conversation where you insist on using the word “morality” to refer to morality_Eliezer2008?”
This is an important objection, which I think establishes the inadequacy of Eliezer’s analysis. It’s a datum (which any adequate metaethical theory must account for) that there can be substantive moral disagreement. When Bob says “Abortion is wrong”, and Sally says, “No it isn’t”, they are disagreeing with each other.
I don’t see how Eliezer can accommodate this. On his account, what Bob asserted is true iff abortion is prohibited by the morality_Bob norms. How can Sally disagree? There’s no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob. On the other hand, it would be changing the subject for Sally to say “Abortion is right” in her own vernacular, where this merely means that abortion is permitted by the morality_Sally norms. (Bob wasn’t talking about morality_Sally, so their two claims are—on Eliezer’s account—quite compatible.)
Since there is moral disagreement, whatever Eliezer purports to be analysing here, it is not morality.
[For more detail, see ‘Is Normativity Just Semantics?’]
Roko: “And, of course, this lack of objectivity leads to problems, because different people will have their own notions of goodness.”
Don’t forget the psychological unity of mankind. Whatever is in our DNA that makes us care about morality at all is a complex adaptation, so it must be pretty much the same in all of us. That doesn’t mean everyone will agree about what is right in particular cases, because they have considered different moral arguments (or in some cases, confused mores with morals), but that-which-responds-to-moral-arguments is the same.
Richard: Abortion isn’t a moral debate. The only reason people disagree about it is because some of them don’t understand what souls are made of, and some of them do. Abortion is a factual debate about the nature of souls. If you know the facts, the moral conclusions are indisputable and obvious.
Larry, not that the particular example is essential to my point, but you’re clearly not familiar with the strongest pro-life arguments.
“There’s no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob.”
This isn’t the clearest example, because it seems like abortion is one of those things everyone would come to agree on if they knew and understood all the arguments. A clearer example is a pencil-maximizing AI vs a paperclip-maximizing AI. Do you think that these two necessarily disagree on any facts? I don’t.
“It’s a datum (which any adequate metaethical theory must account for) that there can be substantive moral disagreement. When Bob says ‘Abortion is wrong’, and Sally says, ‘No it isn’t’, they are disagreeing with each other.”
I wonder though: is this any more mysterious than a case where two children are arguing over whether strawberry or chocolate ice cream is better?
In that case, we would happily say that the disagreement comes from their false belief that it’s a deep fact about the universe which ice cream is better. If Eliezer is right (I’m still agnostic about this), wouldn’t moral disagreements be explained in an analogous way?
Richard: You were correct. That is indeed the strongest pro-life argument I’ve ever read. And although it is quite wrong, the error is one of moral reasoning and not merely factual.
HA: To me this sums up to you picking positions for personal status enhancement rather than for solving the challenges we face. I understand why that would be salient for a non-anonymous blogger. I hope you at least do your best to address them anonymously. Otherwise we could be left with a tragedy of the future outcomes commons, with all the thinkers vying for status over maximizing our future outcomes.
If you were already blogging and started an anonymous blog, how would you avoid giving away your identity in your anonymous blog through things like your style of thinking, or the sort of background justifications you use? It doesn’t seem to me like it could be done.
It seems to me like the word “axioms” belongs in here somewhere.
Eliezer, sure, but that can’t be the whole story. I don’t care about some of the stuff most people care about. Other people whose utility functions differ in similar but different ways from the social norm are called “psychopaths”, and most people think they should either adopt their morals or be removed from society. I agree with this.
So why should I make a special exception for myself, just because that’s who I happen to be? I try to behave as if I shared common morals, but it’s just a gross patch. It feels tacked on, and it is.
I expected (though I had no idea how) that you’d come up with an argument that would convince me to fully adopt such morals. But what you said would apply to any utility function. If a paperclip maximizer wondered about morality, you could tell it: “‘Good’ means ‘maximizes paperclips’. You can think about it all day long, but you’d just end up making a mistake. Is that worth forsaking the beauty of tiling the universe with paperclips? What do you care that there exist, in mindspace, minds that drag children off train tracks?” and it’d work just as well. Yet if you could, I bet you’d choose to make the paperclip maximizer adopt your morals.
Hi. This discussion borders on a thought that I’ve had for a long time and haven’t quite come to terms with, and that is the idea that there are places where reason can explain things, and places where reason cannot explain things, the latter being by far the more frequent. It seems to me that the basis of most of our actions, motivations, and thoughts is really grounded in feelings, desires, and emotions... what we want, what we like, what we want to be... and that the application of reason to our lives is, in most cases, a means of justifying and acting on these feelings and desires. We are not rational creatures. We can, and do, apply reason very effectively to certain areas of human endeavour, but in most of the things we do, it’s not very effective. I’m not knocking reason... it can be very useful. I’m sure that I have not explained myself very well. Perhaps someone with more knowledge and insight into what I’m trying to say can flesh it out. I apologize if I did not address the issue under discussion, but it provided me with an opportunity to get this idea out and see what others have to say about it. ...john