What irritates me about this post is that Yudkowsky just seems to assume without questioning (at least not in that article and related ones) that we ought to be concerned about human morality. In “Fake Utility Functions”, he argues that hedonistic utilitarianism fails to do justice to all the complex human values. But that’s not the goal utilitarians wanted to achieve; that’s not their view of ethics. Ethics should be independent of the evolutionary psychology of Homo sapiens. Self-aware beings could have ended up with different values. What are the meta-criteria by which we should decide what values to have in the first place? Hedonistic utilitarians answer that what matters, ultimately, can only be conscious experience. Yudkowsky seemed to assume that hedonistic utilitarians thought that humans must want to be hedonistic utilitarians deep down. But they don’t need that to be the case at all. Human ethical intuitions could well be more misguided than Yudkowsky acknowledges anyway (e.g. many people have strong intuitions against some of the consequences of consequentialism). Yudkowsky’s dismissal of the One Great Moral Principle thus seems hasty. Toby Ord made a similar point in the comments to “Fake Utility Functions”.
(I don’t want to advocate classical utilitarianism here, because I think there are reasons that speak against happiness being the relevant criterion; I just wanted to point out that more thought should be given to this foundational issue of ethics.)