Population ethics is the most important area within utilitarianism, but the utilitarian answers to population ethics are all wrong, so utilitarianism is an incorrect moral theory.
You can’t weasel your way out by calling it an edge case or saying that utilitarianism “usually” works, when it is really the most important moral question. All the other big-impact utilitarian conclusions derive from population ethics, since they tend to depend on large populations of people.
Utilitarianism can at best be seen as a Taylor expansion that’s valid only for questions whose impact on the total population is negligible.
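A minimal sketch of the analogy (the value function $V$, population size $N$, and average well-being $\bar{u}$ are notation I’m introducing for illustration, not anything standard): expand the true moral value function around the current population,

$$V(N_0 + \Delta N,\ \bar{u}_0 + \Delta \bar{u}) \approx V(N_0, \bar{u}_0) + \frac{\partial V}{\partial \bar{u}}\,\Delta \bar{u} + \frac{\partial V}{\partial N}\,\Delta N.$$

Utilitarianism gets something like the $\Delta \bar{u}$ term roughly right, so, like any truncated expansion, it can be trusted only in the regime where the neglected population term is small, i.e. where $\Delta N \approx 0$.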
The question of population ethics can be dissolved by rejecting personal identity realism. And we already have good reasons to reject personal identity realism, or at least consider it suspect, given the paradoxes that arise in split-brain thought experiments (e.g., the hemisphere swap) if you assume there’s a single correct way to assign personal identity.
This is kind of vague. Doesn’t this start shading into territory like “it’s technically not bad to kill a person if you also create another person”? Or am I misunderstanding what you are getting at?
Completely agree. It’s more like a utility function for a really weird, inhuman kind of agent. That agent finds it obvious that if you had a chance to painlessly kill all humans and replace them with aliens who are 50% happier and 50% more numerous, it would be a wonderful and exciting opportunity. It’s hard to overstate how weird utilitarianism is. And this agent will find it really painful and regretful to be confined by strategic considerations of “the humans would fight you really hard, so you should promise not to do it”, whereas humans find that a relief, or something.
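To spell out the arithmetic in that scenario (writing $N$ for the human population and $\bar{u}$ for average happiness, purely illustrative symbols): under total utilitarianism,

$$U_{\text{before}} = N\bar{u}, \qquad U_{\text{after}} = (1.5N)(1.5\bar{u}) = 2.25\,N\bar{u},$$

so for any positive $\bar{u}$ the total more than doubles, which is why such an agent would treat the swap as an obvious improvement.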
Utilitarianism is indeed just a very crude proxy.
Utilitarianism, like many philosophical subjects, is not a finished theory but one still under active research. There has been significant recent progress on the repugnant conclusion, for example; see this EA Forum post by MichaelStJules. He also has other posts on cutting-edge utilitarianism research. I think many people on LW are not aware of this because they focus, at most, on rationality research rather than ethics research.