Well, Eliezer doesn’t explicitly restrict his theory to humans as far as I can tell. More generally, forms of utilitarianism (be they hedonic, preference-oriented, or some mixture) aren’t a priori restricted to any species. The point is also that some sort of utility is treated as an input to the theory, not a part of the theory. That’s no different between well-being (hedonic utilitarianism) and preferences (preference utilitarianism); I’m not sure why you seem to think so. The African Savanna influenced what sort of things we enjoy or want, but these specifics don’t matter for general theories like utilitarianism or extrapolated volition. Ethics recommends general things like making individuals happy or satisfying their (extrapolated) desires, but it doesn’t recommend giving them, say, specifically chocolate, just because they happen to like (want/enjoy) chocolate for contingent reasons.
Ethics, at least according to utilitarianism, is about maximizing some sort of aggregate utility. Justice, for example, isn’t just a thing humans happen to like; it refers to the aforementioned aggregate, which doesn’t favor one individual over another. So while chocolate isn’t part of ethics, fairness is. An analysis of “x is good” as “x maximizes the utility of Bob specifically” wouldn’t capture the meaning of the term.
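To make that contrast concrete, here is one minimal way to formalize it (the notation $u_i$ for individual utility functions and $n$ for the number of individuals is mine, not from the thread). On this reading, utilitarianism analyzes goodness via an impartial aggregate, into which the contingent contents of the $u_i$ enter only as inputs:

$$\text{``$x$ is good''} \;\approx\; x \in \operatorname*{arg\,max}_{x'} \sum_{i=1}^{n} u_i(x'),$$

whereas $x \in \operatorname*{arg\,max}_{x'} u_{\text{Bob}}(x')$ picks out a partial, Bob-favoring notion that fails the impartiality constraint. Hedonic and preference utilitarianism share this aggregation rule and differ only in what the $u_i$ measure (well-being vs. satisfied preferences), which is one way to cash out “utility is an input to the theory, not a part of it.”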
Let’s consider:
Claim: “Certain things—like maybe fairness, justice, beauty, and/or honesty—are Right / Good / Moral (and conversely, certain things like causing-suffering are Wrong / Bad) for reasons that don’t at all flow through contingent details of the African Savanna applying specific evolutionary pressures to our innate drives. In other words, if hominids had a different evolutionary niche in the African Savanna, and then we were having a similar conversation about what’s Right / Good / Moral, then we would also wind up landing on fairness, justice, beauty, and/or honesty or whatever.”
As I read your comments, I get the (perhaps unfair?) impression that
(1) From your perspective: this claim is so transparently ridiculous that the term “moral realism” couldn’t possibly refer to that, because after all “moral realism” is treated as a serious possibility in academic philosophy, whereas nobody would be so stupid as to believe that claim. (Apparently nostalgebraist believes that claim, based on his “flag” discussion, but so much the worse for him.)
(2) From your perspective: the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
Anyway, I think I keep trying to argue against that claim, but you keep assuming I must be arguing against something else instead, because it wouldn’t be worth my time to argue against something so stupid.
To be clear, yes I think the claim is wrong. But I strongly disagree that no one serious believes it. See for example this essay, which also takes the position that the claim is wrong, but makes it clear that many respected philosophers would in fact endorse that claim. I think most philosophers who describe themselves as moral realists would endorse that claim.
I’m obviously putting words in your mouth, feel free to clarify.
I’m not sure what exactly you mean by “landing on”, but I do indeed think that the concept of goodness is a fairly general, natural, and broadly useful concept that many different intelligent species would converge on introducing in their languages. Presumably some distinct human languages have introduced that concept independently as well. Goodness seems to be a generalization of the concept of altruism, which, along with egoism, is arguably also a very natural concept. Alternatively, one could see ethics (morality) as a generalization of the concept of instrumental rationality (maximization of the sum/average of all utility functions rather than of one’s own), which seems to be quite natural itself.
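To spell out that last parallel (notation mine, not from the thread): where instrumental rationality has agent $i$ maximize its own utility, the ethical generalization swaps in an impartial aggregate over all $n$ agents:

$$a^*_{\text{rational}} \in \operatorname*{arg\,max}_{a} \, u_i(a), \qquad a^*_{\text{ethical}} \in \operatorname*{arg\,max}_{a} \, \frac{1}{n}\sum_{j=1}^{n} u_j(a).$$

(For a fixed population the sum and the average have the same maximizer, so the sum-vs-average choice only matters in variable-population cases.)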
But if you mean by “landing on” that different intelligent species would be equally motivated to be ethical in various respects, then that seems very unlikely. Intelligent animals living in social groups would likely care much more about other individuals than mostly solitary animals like octopuses do. The natural group size also matters: humans care about themselves and immediate family members much more than about distant relatives, and even less about people with a very foreign language / culture / ethnicity.
the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
There are many variants of these, and together they cover basically all types of utilitarianism. Utilitarianism has so many facets that most plausible ethical theories (like deontology or contractualism) can probably be rephrased in roughly utilitarian terms, so I wouldn’t count that as a major restriction.