Eliezer has a more recent metaethical theory (basically “x is good” = “x increases extrapolated volition”) which is moral realist in a conventional way. He discusses it here. It’s approximately a form of idealized-preference utilitarianism.
the thing that most self-described moral realists actually believe, as opposed to the trivialities above—is that moral statements can be not just true but also that their truth is “universally accessible to reason and reflection” in a sense. That’s what you need for nostalgebraist’s attempted reductio ad absurdum.
Well, the truth of something being “universally accessible to reason and reflection” would still just result in a belief, which is (per weak orthogonality) different in principle from a desire. And a desire would be needed for the reductio; otherwise we just have a psychopathic AI that understands ethics perfectly well but doesn’t care about it.
Eliezer has a more recent metaethical theory (basically “x is good” = “x increases extrapolated volition”) which is moral realist in a conventional way. He discusses it here.
I don’t think that’s “moral realist in a conventional way”, and I don’t think it’s in contradiction with my second bullet in the comment above. Different species have different “extrapolated volition”, right? I think that link is “a moral realist theory which is only trivially different from a typical moral antirealist theory”. Just go through Eliezer’s essay and do a global find-and-replace of “extrapolated volition” with “extrapolated volition_{human species}”, and “good” with “good_{human species}”, etc., and bam, now it’s a central example of a moral antirealist theory. You could not do the same with, say, metaethical hedonism without sucking all the force out of it—the whole point of metaethical hedonism is that it has some claim to naturalness and universality, and does not depend on contingent facts about life in the African Savanna. When I think of “moral realist in a conventional way”, I think of things like metaethical hedonism, right?
Well, Eliezer doesn’t explicitly restrict his theory to humans as far as I can tell. More generally, forms of utilitarianism (be they hedonic, preference-oriented, or some mixture) aren’t a priori restricted to any species. The point is also that some sort of utility is treated as an input to the theory, not a part of the theory. That’s no different whether the input is well-being (hedonic utilitarianism) or preferences (preference utilitarianism). I’m not sure why you seem to think so. The African Savanna influenced what sorts of things we enjoy or want, but these specifics don’t matter for general theories like utilitarianism or extrapolated volition. Ethics recommends general things like making individuals happy or satisfying their (extrapolated) desires, but it doesn’t recommend giving them, for example, specifically chocolate, just because they happen to like (want/enjoy) chocolate for contingent reasons.
Ethics, at least according to utilitarianism, is about maximizing some sort of aggregate utility. Justice, for example, isn’t just a thing humans happen to like; concepts like justice and fairness refer to the aforementioned aggregate, which doesn’t favor one individual over another. So while chocolate isn’t part of ethics, fairness is. An analysis of “x is good” as “x maximizes the utility of Bob specifically” wouldn’t capture the meaning of the term.
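To put that contrast in rough symbols (a toy sketch of standard utilitarian aggregation on my part, not something from the linked essay): on a utilitarian reading,

$$x \text{ is good} \;\approx\; x \in \arg\max_{x'} \sum_{i=1}^{n} U_i(x'),$$

where $U_i$ is individual $i$’s utility function and the sum runs symmetrically over all $n$ individuals, so nobody’s utility is privileged. The Bob-only analysis would replace $\sum_i U_i(x')$ with the single term $U_{\text{Bob}}(x')$, which is exactly what loses the fairness ingredient.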
Let’s consider:
Claim: “Certain things—like maybe fairness, justice, beauty, and/or honesty—are Right / Good / Moral (and conversely, certain things like causing-suffering are Wrong / Bad) for reasons that don’t at all flow through contingent details of the African Savanna applying specific evolutionary pressures to our innate drives. In other words, if hominids had a different evolutionary niche in the African Savanna, and then we were having a similar conversation about what’s Right / Good / Moral, then we would also wind up landing on fairness, justice, beauty, and/or honesty or whatever.”
As I read your comments, I get the (perhaps unfair?) impression that:
(1) From your perspective: this claim is so transparently ridiculous that the term “moral realism” couldn’t possibly refer to that, because after all “moral realism” is treated as a serious possibility in academic philosophy, whereas nobody would be so stupid as to believe that claim. (Apparently nostalgebraist believes that claim, based on his “flag” discussion, but so much the worse for him.)
(2) From your perspective: the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
Anyway, I think I keep trying to argue against that claim, but you keep assuming I must be arguing against something else instead, because it wouldn’t be worth my time to argue against something so stupid.
To be clear, yes I think the claim is wrong. But I strongly disagree that no one serious believes it. See for example this essay, which also takes the position that the claim is wrong, but makes it clear that many respected philosophers would in fact endorse that claim. I think most philosophers who describe themselves as moral realists would endorse that claim.
I’m obviously putting words in your mouth; feel free to clarify.
I’m not sure what exactly you mean by “landing on”, but I do indeed think that the concept of goodness is a fairly general, natural, and broadly useful concept that many different intelligent species would naturally converge on introducing in their languages. Presumably some distinct human languages have introduced that concept independently as well. Goodness seems to be a generalization of the concept of altruism, which is, along with egoism, arguably also a very natural concept. Alternatively, one could see ethics (morality) as a generalization of the concept of instrumental rationality (maximization of the sum/average of all utility functions rather than of just one), which seems to be quite natural itself.
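As a rough sketch of that last point (my own gloss, setting aside the usual worries about interpersonal utility comparison): instrumental rationality for an agent $i$ is roughly

$$\max_x \; U_i(x),$$

while the ethical generalization swaps in the aggregate,

$$\max_x \; \frac{1}{n}\sum_{j=1}^{n} U_j(x) \quad \text{(or the unnormalized sum)},$$

i.e. the same maximization template, applied to everyone’s utility functions rather than just one’s own.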
But if you mean by “landing on” that different intelligent species would be equally motivated to be ethical in various respects, then that seems very unlikely. Intelligent animals living in social groups would likely care much more about other individuals than mostly solitary animals like octopuses do. The natural group size also matters: humans care about themselves and immediate family members much more than about distant relatives, and even less about people with a very foreign language / culture / ethnicity.
the only two possible options for ethical theories are hedonistic utilitarianism and preference utilitarianism (and variations thereof).
There are many variants of these, and together they cover basically all types of utilitarianism. Utilitarianism has so many facets that most plausible ethical theories (like deontology or contractualism) can probably be rephrased in roughly utilitarian terms. So I wouldn’t count that as a major restriction.