This is tangential to the point of the post, but “moral realism” is a much weaker claim than you seem to think. Moral realism only means that some moral claims are literally true. Popular uncontroversial examples: “torturing babies for fun is wrong” or “ceteris paribus, suffering is bad”. It doesn’t mean that someone is necessarily motivated by those claims if they believe they are true. It doesn’t imply that anyone is motivated to be good just from believing that something is good. A psychopath can agree “yes, doing X is wrong, but I don’t care about ethics” and shrug his shoulders. Moral realism doesn’t require a necessary connection between beliefs and desires; it is compatible with the (weak) orthogonality thesis.
You need to define what you mean by good for this to make sense. If good means “what you should do” then it’s exactly the big claim Steve is arguing against. If it means something else, what is it?
I do have ideas about what people mean by “good” other than “what you should do”, but they’re complex. That’s why I think you need to define the term more for this claim to make sense.
If good means “what you should do” then it’s exactly the big claim Steve is arguing against.
If Steve is saying that the moral facts need to be intrinsically motivating, that is a stronger claim than “the good is what you should do”, i.e., it is the claim that “the good is what you would do”. But, as cubefox points out, being intrinsically motivating isn’t part of moral realism as defined in the mainstream. (It is apparently part of moral realism as defined on LW, because of something EY said years ago.) Also, since moral realism is a metaethical claim, there is no need to specify the good at the object level.
I’d be happy to come back later and give my guesses at what people tend to mean by “good”; it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”.
Once again, theories aren’t definitions.
People don’t all have to have the same moral theory. At the same time, there has to be a common semantic basis for disagreement, rather than talking past each other, to take place. “The good is what you should do” is pretty reasonable as a shared definition, since it is hard to dispute, but also neutral between “the good” being defined personally, tribally, or universally.
Good points. I think the term moral realism is probably used in a variety of ways in the public sphere. I think the relevant sense is “will alignment solve itself because a smart machine will decide to behave in a way we like”. If there’s some vague sense of stuff everyone “should” do, but it doesn’t make them actually do it, then it doesn’t matter for this purpose.
I was (and have been) making a theory about definitions.
I think “the good is what you should do” is remarkably devoid of useful meaning. People often mean very little by “should”, are unclear both to others and themselves, and use it in different ways in different situations.
My theory is that “good” is usually defined as an emotion, not another set of words, and that emotion roughly means “I want that person on my team” (when applied to behavior), because evolution engineered us to find useful teammates, and that feeling is its mechanism for doing so.
Good points. I think the term moral realism is probably used in a variety of ways in the public sphere. I think the relevant sense is “will alignment solve itself because a smart machine will decide to behave in a way we like”. If there’s some vague sense of stuff everyone “should” do, but it doesn’t make them actually do it, then it doesn’t matter for this purpose.
I think “the good is what you should do” is remarkably devoid of useful meaning. People often mean very little by “should”, are unclear both to others and themselves, and use it in different ways in different situations.
For understanding human ethics, the important thing is that it grounds out in punishments and rewards—the good is what you should do, and if you don’t do it, you face punishment. That also means a theory of ethics must be sufficient to justify putting people in jail. But a definition is not a theory.
My theory is that “good” is usually defined as an emotion, not another set of words, and that emotion roughly means “I want that person on my team” (when applied to behavior),
If your whole theory of ethics is to rubber stamp emotions or opinions, you end up with a very superficial theory that is open to objections like the Open Question argument. Just because somebody feels it is good to do X does not mean it necessarily is—it is an open question. If the good is your emotions, then it is a closed question: your emotions are your emotions, likewise your values are your values, and your opinions are your opinions. The openness of the question “you feel that X is good, but is it really?” is a *theoretical* reason for believing that “goodness” works more like “truth” and less like “belief”.
(And the OQA is quite likely what this passage by Nostalgebraist hints at:
Who shoots down the enemy soldiers while thinking, “if I had been born there, it would have been all-important for their side to win, and so I would have shot at the men on this side. However, I was born in my country, not theirs, and so it is all-important that my country should win, and that theirs should lose.
There is no reason for this. It could have been the other way around, and everything would be left exactly the same, except for the ‘values.’
I cannot argue with the enemy, for there is no argument in my favor. I can only shoot them down.”)
because evolution engineered us to find useful teammates, and that feeling is its mechanism for doing so
And having gathered our team to fight the other team, we can ask ourselves whether we might actually be the baddies.
The *practical* objection kicks in when there are conflicts between subjective views.
A theory of ethics needs to justify real-world actions—especially actions that impact other people, and especially actions that impact other people negatively. (It’s not just about passively understanding the world, about “what anticipated experiences come about from the belief that something is ‘good’ or ‘bad’?”) Why should someone really go to jail, if they haven’t really done anything wrong? Well, if the good is what you should do, jailing people is justifiable, because the kind of thing you shouldn’t do is the kind of thing you deserve punishment for.
Of course, the open question argument doesn’t take you all the way to full-strength moral realism. Less obviously, there are many alternatives to MR. Nihilism is one: you can’t argue that emotivism is true because MR is false—emotivism might be wrong because ethics is nothing. Emotivism might also be wrong because some position weaker than MR is right.
I don’t think anyone needs to define what words used in ordinary language mean, because the validity of any attempt at such a definition would itself have to be checked against the intuitive meaning of the word in common usage.
If good means “what you should do” then it’s exactly the big claim Steve is arguing against.
I do think the meaning is indeed similar (except for supererogatory statements), but the argument isn’t affected. For example, I can believe that I shouldn’t eat meat, or that eating meat is bad, without being motivated to stop eating meat.
I have no idea what you mean by your claim if you won’t define the central term. Or I do, but I’m just guessing. I think people are typically very vague in what they mean by “good”, so it’s not adequate for analytical discussion. In this case, a vague sense of good produces only a vague sense in which “moral realism” isn’t a strong claim. I just don’t know what you mean by that.
I’d be happy to come back later and give my guesses at what people tend to mean by “good”; it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”. But it would require a lot more words to even start nailing down. And while that’s a claim about reality, it’s quite a complex, dependent, and therefore vague claim, so I’d be reluctant to call it moral realism. Although it is in one sense. So maybe that’s what you mean?
Almost all terms in natural language are vague, but that doesn’t mean they are all ambiguous or somehow defective and in need of an explicit definition. We know what words mean, we can give examples, but we don’t have definitions in our mind. Imagine you say that believing X is irrational, and I reply “I don’t believe in ‘rational realism’, I think ‘rational’ is a vague term, can you give me a definition of ‘rational’ please?” That would be absurd. Of course I know what rational means, I just can’t define it, but we humans can hardly define any natural language terms at all.
it’s something like “stuff people do whom I want on my team” or “actions that make me feel positively toward someone”. But it would require a lot more words to even start nailing down. And while that’s a claim about reality, it’s quite a complex, dependent, and therefore vague claim, so I’d be reluctant to call it moral realism.
That would indeed not count as moral realism; the form of anti-realism would probably be something similar to subjectivism (“x is good” ≈ “I like x”) or expressivism (“x is good” ≈ “Yay x!”).
But I don’t think this can make reasonable sense of beliefs. That I believe something is good doesn’t mean that I feel positive toward myself, or that I like it, or that I’m cheering for myself, or that I’m booing my past self if I changed my mind. Sometimes I may also just wonder whether something is good or bad (e.g. eating meat) which arguably makes no sense under those interpretations.
Imagine you say that believing X is irrational, and I reply “I don’t believe in ‘rational realism’, I think ‘rational’ is a vague term, can you give me a definition of ‘rational’ please?” That would be absurd. Of course I know what rational means, I just can’t define it, but we humans can hardly define any natural language terms at all.
I don’t think I could disagree any more strongly about this. In fact, I am kind of confused about your choice of example, because ‘rationality’ seems to me like such a clear counter to your argument. It is precisely the type of slippery concept that is portrayed inaccurately (relative to LW terminology) in mainstream culture and thus inherently requires a more rigorous definition and explanation. This was so important that “the best intro-to-rationality for the general public” (according to @lukeprog) specifically addressed the common misconception that being rational means being a Spock-like Straw Vulcan. It was so important that one of the crucial posts in the first Sequence by Eliezer spends almost 2000 words defining rationality. So important that, 14 years later, @Raemon had to write yet another post (with 150 upvotes) explaining what rationality is not, as a result of common and lasting confusions by users on this very site (presumably coming about as a result of the original posts not clarifying matters sufficiently).
What about the excellent and important post “Realism about Rationality” by Richard Ngo, which expresses “skepticism” about the mindset he calls “realism about rationality,” thus disagreeing with others who do think “this broad mindset is mostly correct, and the objections outlined in this essay are mostly wrong” and argue that “we should expect a clean mathematical theory of rationality and intelligence to exist”? Do you “of course know what rationality means” if you cannot settle as important a question as this? What about Bryan Caplan’s arguments that a drug addict who claims they want to stop buying drugs but can’t prevent themselves from doing so is actually acting perfectly rationally, because, in reality, their revealed preferences show that they really do want to consume drugs, and are thus rationally pursuing those goals by buying them? Caplan is a smart person expressing serious disagreement with the mainstream, intuitive perceptions of rationality and human desires; this strongly suggests that rationality is indeed, as you put it, “ambiguous or somehow defective and in need of an explicit definition.”
It wouldn’t be wrong to say that LessWrong was built to advance the study of rationality, both as it relates to humans and to AI. The very basis of this site and of the many Sequences and posts expanding upon these ideas is the notion that our understanding of rationality is currently inadequate and needs to be straightened out.
That I believe something is good doesn’t mean that I feel positive toward myself, or that I like it, or that I’m cheering for myself, or that I’m booing my past self if I changed my mind. Sometimes I may also just wonder whether something is good or bad (e.g. eating meat) which arguably makes no sense under those interpretations.
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, which I have not seen a satisfactory answer to by moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time). And if there is no answer, then the concept of “moral facts” becomes essentially useless, like any other belief that pays no rent.

A long time ago, @Roko laid out a possible thesis of “strong moral realism”: that “All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.” He also correctly noted that “most modern philosophers who call themselves ‘realists’ don’t mean anything nearly this strong. They mean that there are moral ‘facts’, for varying definitions of ‘fact’ that typically fade away into meaninglessness on closer examination, and actually make the same empirical predictions as antirealism.” Roko’s post lays out clear anticipated experiences coming about from this version of moral realism; it is falsifiable, and most importantly, it is about reality because it constrains reality, if true (but, as it strongly conflicts with the Orthogonality Thesis, the vast majority of users here would strongly disbelieve it is true). Something like what Roko illustrated is necessary to answer the critiques of moral anti-realists like @Steven Byrnes, who are implicitly saying that reality is not at all constrained to any system of (human-intelligible) morality.
There is a large difference between knowing the meaning of a word, and knowing its definition. You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples. In philosophy those are somewhat famous cases of words that are hard to define, but most words from natural language could be chosen instead.
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions. Answering “Do you know where I left my keys?” doesn’t require a definition for “knowledge”. Answering “Is believing in ghosts irrational?” doesn’t require a definition of “rationality”. And answering “Is eating Bob’s lunch bad?” doesn’t require a definition of “bad”.
Attempting to find such definitions is called philosophy, or conceptual analysis specifically. It helps with abstract reasoning by finding relations between concepts. For example, when asked explicitly, most people can’t say how knowledge and belief relate to each other (I tried). Philosophers would reply that knowledge implies belief but not the other way round, or that belief is internal while knowledge is (partly) external. In some cases knowing this is kind of important, but usually it isn’t.
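As an aside, the relation described here is sometimes written compactly in the notation of epistemic logic; the following is only a restatement of the sentence above, with $K\varphi$ read as “$\varphi$ is known” and $B\varphi$ as “$\varphi$ is believed”:

$$K\varphi \rightarrow B\varphi, \qquad \text{but not} \qquad B\varphi \rightarrow K\varphi$$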
What anticipated experiences come about from the belief that something is “good” or “bad”? This is the critical question, which I have not seen a satisfactory answer to by moral realists (Eliezer himself does have an answer to this on the basis of CEV, but that is a longer discussion for another time).
Well, why not try to answer it yourself? I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want. I directionally agree with EY’s extrapolated volition explication of goodness (I linked to it in a neighboring comment). As he mentions, there are several philosophers who have provided similar analyses.
You know perfectly well how to use ordinary words like “knowledge” or “game”, in that sense you understand what they mean, yet you almost certainly don’t know an adequate (necessary and sufficient) definition for them, i.e. one that doesn’t suffer from counterexamples.
It is interesting that you chose the example of “knowledge” because I think that is yet another illustration of the complete opposite of the position you are arguing for. I was not born with an intuitive understanding of Bayesianism, for example. However, I now consider anyone who hasn’t grasped Bayesian thinking (such as previous versions of me) but is nonetheless trying to seriously reason about what it means to know something to be terribly confused and to have a low likelihood of achieving anything meaningful in any non-intuitive context where formalizing/using precise meanings of knowledge is necessary. I would thus say that the vast majority of people who use ordinary words like “knowledge” don’t understand what they mean (or, to be more precise, they don’t understand the concepts that result from carving reality at its joints in a coherent manner).
That’s not to say that definition is useless, but it’s not something we need when evaluating most object level questions.
I don’t care about definitions per se. The vast majority of human concepts and mental categories don’t work on the basis of necessary and sufficient conditions anyway, so an inability to supply a fully generalizable definition for something is caused much more by the fundamental failings of our inadequate language than by issues with our conceptual formation. Nevertheless, informal and non-rigorous thinking about concepts can easily lead into confusion and the reification of ultimately nonsensical ideas if they are not subject to enough critical analysis in the process.
or conceptual analysis specifically
Given my previous paragraph, I don’t think you would be surprised to hear that I find conceptual analysis to be virtually useless and a waste of resources, for basically the reasons laid out in detail by @lukeprog in “Concepts Don’t Work That Way” and “Intuitions Aren’t Shared That Way” almost 12 years ago. His (in my view incomplete) sequence on Rationality and Philosophy is as much a part of LW’s core as Eliezer’s own Sequences are, so while reasonable disagreement with it is certainly possible, I start with a very strong prior that it is correct, for purposes of our discussion.
Well, why not try to answer it yourself?
Well, I have tried to answer it myself, and after thinking about it very seriously and reading what people on all sides of the issue have thought about it, I have come to the conclusion that concepts of “moral truth” are inherently confused, pay no rent in anticipated experiences, and are based upon flaws in thinking that reveal how common-sensical intuitions are totally unmoored from reality when you get down to the nitty-gritty of it. Nevertheless, given the importance of this topic, I am certainly willing to change my mind if presented with evidence.
I’d say evidence for something being “good” is approximately when we can expect that it increases general welfare, like people being happy or able to do what they want.
That might well be evidence (in the Bayesian sense) that a given act, value, or person belongs to a certain category which we slap the label “good” onto. But it has little to do with my initial question. We have no reason to care about the property of “goodness” at all if we do not believe that knowing something is “good” gives us powerful evidence that allows us to anticipate experiences and to constrain the territory around us. Otherwise, “goodness” is just an arbitrary bag of things that is no more useful than the category of “bleggs” that is generated for no coherent reason whatsoever, or the random category “r398t” that I just made up, which contains only apples, weasels, and Ron Weasley. Indeed, we would not even have enough reason to raise the question of what “goodness” is in the first place.
To take a simple illustration of the difference between the conditions for membership in a category and the anticipated experiences resulting from “knowing” that something is a member of that category, consider groups in mathematics. The definition of a group is “a set together with a binary operation that satisfies the axioms of associativity, identity, and inverses.” But we don’t care about groups for reasons that deal only with these axioms; on the contrary, groups matter because they help model important situations in reality (such as symmetry groups in physics) and because we can tell a lot about the nature and structure of groups through mathematical reasoning. The fact that finite simple groups can be classified in a clear and concise manner is a consequence of their definition (not a formal precondition for their membership) and allows us to anticipate with extremely high (although not full) certainty that if we consider a finite simple group G, it will be isomorphic to one of the groups in that classification (a cyclic group of prime order, an alternating group, a group of Lie type, or one of the 26 sporadic groups).
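For readers who want those axioms in front of them, here is the standard textbook statement of the definition quoted above (nothing here is specific to this thread; it is included only so the membership conditions being contrasted with the classification theorem are concrete):

$$\text{A group is a set } G \text{ with an operation } \cdot : G \times G \to G \text{ such that}$$
$$\forall\, a, b, c \in G:\quad (a \cdot b) \cdot c = a \cdot (b \cdot c) \qquad \text{(associativity)}$$
$$\exists\, e \in G \ \forall\, a \in G:\quad e \cdot a = a \cdot e = a \qquad \text{(identity)}$$
$$\forall\, a \in G \ \exists\, a^{-1} \in G:\quad a \cdot a^{-1} = a^{-1} \cdot a = e \qquad \text{(inverses)}$$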
I don’t understand your point about anticipated experience. If I believe some action is good, I anticipate that doing that action will produce evidence (experience) that is indicative of increased welfare. That is exactly not like believing something to be “blegg”. Regarding mathematical groups, whether or not we care about them for their usefulness in physics seems not relevant to whether “group” has a specific meaning. Like, you may not care about horses, but you still anticipate a certain visual experience when someone tells you they bought you a horse and it’s right outside. And for a group you’d anticipate that it turns out to satisfy associativity etc.
Yeah I oversimplified. :) I think “the literal definition of moral realism” is a bit different from “the important substantive things that people are usually talking about when they talk about moral realism”, and I was pointing at the latter instead of the former. For example:
It’s possible to believe: “there is a moral truth, but it exists in another realm to which we have no epistemic access or connection. For all we know, the true moral imperative is to maximize helium. We can never possibly know one way or the other, so this moral truth is entirely irrelevant to our actions and decisions. Tough situation we find ourselves in!”

See: The ignorance of normative realism bot.
This position is literally moral realism, but in practice this person will be hanging out with the moral antirealists (and nihilists!) when deciding what to do with their lives and why.
It’s possible to believe: “there is a moral truth, and it is inextricably bound up with entirely contingent (“random”) facts about the human brain and its innate drives. For example, maybe it turns out that “justice is part of true morality”, but if the African Savanna had had a different set of predators, then maybe we would be a slightly different but equally intelligent species having an analogous discussion, and we would be saying “justice is not part of true morality”, and nobody in this story has made any mistake in their logic. Rather, we are humans, and “morality” is our human word, so it’s fine if there’s contingent-properties-of-human-brains underlying what that word points to.”
I believe Eliezer would put himself in this camp; see my summary here.
Again, this position is literally moral realism, but it has no substantive difference whatsoever from a typical moral antirealism position. The difference is purely semantics / terminological choices. Just replace “true morality” with “true morality_(human species)” and so on. Again see here for details.
Anyway, my strong impression is that a central property of moral realist claims—the thing that makes those claims substantively different from moral antirealism, in a way that feeds into pondering different things and making different decisions, the thing that most self-described moral realists actually believe, as opposed to the trivialities above—is that moral statements can be not just true but also that their truth is “universally accessible to reason and reflection” in a sense. That’s what you need for nostalgebraist’s attempted reductio ad absurdum (where he says: if I had been born in the other country, I would be holding their flag, etc.) to not apply. So that’s what I was trying to talk about. Sorry for leaving out these nuances. If there’s a better terminology for what I’m talking about, I’d be interested to hear it. :)
Eliezer has a more recent metaethical theory (basically “x is good” = “x increases extrapolated volition”) which is moral realist in a conventional way. He discusses it here. It’s approximately a form of idealized preference utilitarianism.
the thing that most self-described moral realists actually believe, as opposed to the trivialities above—is that moral statements can be not just true but also that their truth is “universally accessible to reason and reflection” in a sense. That’s what you need for nostalgebraist’s attempted reductio ad absurdum
Well, the truth of something being “universally accessible to reason and reflection” would still just result in a belief, which is (per weak orthogonality) different in principle from a desire. And a desire would be needed for the reductio, otherwise we have just a psychopath AI that understands ethics perfectly well but doesn’t care about it.
Eliezer has a more recent metaethical theory (basically “x is good” = “x increases extrapolated volition”) which is moral realist in a conventional way. He discusses it here.
I don’t think that’s “moral realist in a conventional way”, and I don’t think it’s in contradiction with my second bullet in the comment above. Different species have different “extrapolated volition”, right? I think that link is “a moral realist theory which is only trivially different from a typical moral antirealist theory”. Just go through Eliezer’s essay and do a global-find-and-replace of “extrapolated volition” with “extrapolated volition_(human species)”, and “good” with “good_(human species)”, etc., and bam, now it’s a central example of a moral antirealist theory. You could not do the same with, say, metaethical hedonism without sucking all the force out of it—the whole point of metaethical hedonism is that it has some claim to naturalness and universality, and does not depend on contingent facts about life in the African Savanna. When I think of “moral realist in a conventional way”, I think of things like metaethical hedonism, right?
Well, Eliezer doesn’t explicitly restrict his theory to humans as far as I can tell. More generally, forms of utilitarianism (be it hedonic or preference oriented or some mixture) aren’t a priori restricted to any species. The point is also that some sort of utility is treated as an input to the theory, not a part of the theory. That’s no different between well-being (hedonic utilitarianism) and preferences. I’m not sure why you seem to think so. The African Savanna influenced what sort of things we enjoy or want, but these specifics don’t matter for general theories like utilitarianism or extrapolated volition. Ethics recommends general things like making individuals happy or satisfying their (extrapolated) desires, but ethics doesn’t recommend giving them, for example, specifically chocolate, just because they happen to like (want/enjoy) chocolate for contingent reasons.
Ethics, at least according to utilitarianism, is about maximizing some sort of aggregate utility. E.g. justice isn’t just a thing humans happen to like; it refers to the aforementioned aggregate, which doesn’t favor one individual over another. So while chocolate isn’t part of ethics, fairness is. An analysis of “x is good” as “x maximizes the utility of Bob specifically” wouldn’t capture the meaning of the term.
Let’s consider:

Claim: “Certain things—like maybe fairness, justice, beauty, and/or honesty—are Right / Good / Moral (and conversely, certain things like causing-suffering are Wrong / Bad) for reasons that don’t at all flow through contingent details of the African Savanna applying specific evolutionary pressures to our innate drives. In other words, if hominids had a different evolutionary niche in the African Savanna, and then we were having a similar conversation about what’s Right / Good / Moral, then we would also wind up landing on fairness, justice, beauty, and/or honesty or whatever.”
As I read your comments, I get the (perhaps unfair?) impression that
(1) From your perspective: this claim is so transparently ridiculous that the term “moral realism” couldn’t possibly refer to that, because after all “moral realism” is treated as a serious possibility in academic philosophy, whereas nobody would be so stupid as to believe that claim. (Apparently nostalgebraist believes that claim, based on his “flag” discussion, but so much the worse for him.)
(2) From your perspective: the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
Anyway, I think I keep trying to argue against that claim, but you keep assuming I must be arguing against something else instead, because it wouldn’t be worth my time to argue against something so stupid.
To be clear, yes I think the claim is wrong. But I strongly disagree that no one serious believes it. See for example this essay, which also takes the position that the claim is wrong, but makes it clear that many respected philosophers would in fact endorse that claim. I think most philosophers who describe themselves as moral realists would endorse that claim.
I’m obviously putting words in your mouth, feel free to clarify.
I’m not sure what exactly you mean by “landing on”, but I do indeed think that the concept of goodness is a fairly general and natural or broadly useful concept that many different intelligent species would naturally converge to introduce in their languages. Presumably some distinct human languages have introduced that concept independently as well. Goodness seems to be a generalization of the concept of altruism, which is, along with egoism, arguably also a very natural concept. Alternatively one could see ethics (morality) as a generalization of the concept of instrumental rationality (maximization of the sum/average of all utility functions rather than of one), which seems to be quite natural itself.
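To make that parenthetical concrete, here is one rough way to write the “generalization of instrumental rationality” idea down; this is my own paraphrase of the claim above rather than a formulation its author committed to, with $U_j$ standing for agent $j$’s utility function and $a$ ranging over actions:

$$\text{instrumental rationality (agent } i\text{):}\qquad a^{*} = \arg\max_{a}\; U_i(a)$$
$$\text{ethics, aggregate form:}\qquad a^{*} = \arg\max_{a}\; \frac{1}{n}\sum_{j=1}^{n} U_j(a)$$

Whether the aggregate should be a sum, an average, or something else entirely is of course itself a further question, not settled by anything here.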
But if you mean by “landing on” that different intelligent species would be equally motivated to be ethical in various respects, then that seems very unlikely. Intelligent animals living in social groups would likely care much more about other individuals than mostly solitary animals like octopuses. Also the natural group size matters. Humans care about themselves and immediate family members much more than about distant relatives, and even less about people with a very foreign language / culture / ethnicity.
the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
There are many variants of these, and those cover basically all types of utilitarianism. Utilitarianism has so many facets that most plausible ethical theories (like deontology or contractualism) can probably be rephrased in roughly utilitarian terms. So I wouldn’t count that as a major restriction.