My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow—otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we’re obliged to take care of them in a way that we wouldn’t be obliged to create them in the first place—and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn’t kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit’s world, we have a little celebration and say with honest joy “Whoopee!”, not, “Damn, now it’s too late to uncreate them.”
And then the rest of the Repugnant Conclusion—that it’s better to have a billion lives slightly worth celebrating than a million lives very worth celebrating—is just “repugnant” because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations. Alternatively, average utilitarians—I suspect I am one—may just reject the very first step, in which the average quality of life goes down.
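The two verdicts can be made concrete with a toy calculation (the welfare numbers here are my own invention, purely for illustration):

```python
def total_welfare(n, w):
    """Total welfare of n lives, each at welfare level w."""
    return n * w

def average_after_adding(avg, n, w, m):
    """Average welfare after adding m people at welfare w
    to n existing people whose average welfare is avg."""
    return (n * avg + m * w) / (n + m)

# Totalism multiplies: a billion tiny celebrations outweigh a million big ones.
assert total_welfare(1_000_000_000, 1.0) > total_welfare(1_000_000, 100.0)

# Averagism rejects the very first step: adding even one person whose
# (positive) welfare is below the current average lowers that average.
assert average_after_adding(100.0, 1_000_000, 1.0, 1) < 100.0
```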
This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren’t worth living than it is to create a large number of lives that are barely worth living.
Average utilitarianism also tends to choke horribly under other circumstances. Consider a population whose average welfare is negative. If you then add a bunch of people whose welfare is slightly less negative than the average, you improve average welfare, but you’ve still just created a bunch of people who would prefer not to have existed. That can’t be good.
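A minimal sketch of the problem, with invented numbers:

```python
# Invented numbers: 10 existing people at welfare -10, so the average is -10.
before = [-10.0] * 10
# Add 90 people at welfare -9: slightly less bad than the average,
# but still lives whose owners would prefer not to have existed.
added = [-9.0] * 90
after = before + added

average = lambda xs: sum(xs) / len(xs)
assert average(after) > average(before)  # the average "improves"...
assert all(w < 0 for w in added)         # ...by creating lives not worth living
```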
There are several “impossibility” theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.
Where can I find the theorems?
Gustaf Arrhenius is the main person to read on this topic. Check out ch. 10-11 of his dissertation, Future Generations: A Challenge for Moral Theory (though he has a forthcoming book that will supersede it). You can find more papers on his website; look for the ones with “impossibility theorem” in the title.
Do average utilitarians have a standard answer to the question of what is the average welfare of zero people? The theory seems consistent with any such answer. If you’re maximizing the average welfare of the people alive at some future point in time, and there’s a nonzero chance of causing or preventing extinction, then the answer matters, too.
Usually, average utilitarians are interested in maximizing the average well-being of all the people who ever exist; they are not fundamentally interested in the average well-being of the people alive at particular points in time. Since some people have already existed, the average welfare of zero people is only a technical problem for average utilitarianism (and one that could not possibly affect anyone’s decision).
Incidentally, failing to distinguish between averages over all the people who ever exist and averages over the people who exist at some time leads some people to wrongly conclude that average utilitarianism favors killing off people who are happy, but less happy than average. On the all-time view this doesn’t work: the dead still count in the average, so killing someone merely cuts short whatever welfare their life would have added.
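To make this concrete with made-up numbers (the 40-unit loss is just an assumed amount of foregone future welfare):

```python
# Invented lifetime welfare totals for everyone who ever exists.
# Bob is happy (positive lifetime welfare) but below the average.
ever_lived = {"alice": 100.0, "bob": 60.0}

def all_time_average(welfares):
    """Average welfare over all people who ever exist."""
    return sum(welfares.values()) / len(welfares)

baseline = all_time_average(ever_lived)

# Killing Bob early doesn't remove him from the all-time average; he stays
# in the denominator and merely loses, say, 40 units of future welfare.
after_killing = {"alice": 100.0, "bob": 60.0 - 40.0}
assert all_time_average(after_killing) < baseline
```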
A related, commonly missed distinction is between maximizing welfare divided by lives and maximizing welfare divided by life-years. The second is more prone to endorsing euthanasia hypotheticals.
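The divergence can be shown with invented life histories (welfare-per-year figures are purely illustrative):

```python
# Invented life histories, as lists of per-year welfare.
full_life = [2.0] * 70 + [0.5] * 10   # a good life with a mildly positive last decade
truncated = [2.0] * 70                # the euthanasia hypothetical: cut the tail
other     = [2.0] * 80                # a second person, so the averages are non-trivial

def per_life(pop):
    """Total welfare divided by number of lives."""
    return sum(sum(life) for life in pop) / len(pop)

def per_life_year(pop):
    """Total welfare divided by number of life-years."""
    return sum(sum(life) for life in pop) / sum(len(life) for life in pop)

# Cutting the mildly positive tail raises the per-life-year average...
assert per_life_year([other, truncated]) > per_life_year([other, full_life])
# ...but lowers the per-life average: the person simply loses 5 units of welfare.
assert per_life([other, truncated]) < per_life([other, full_life])
```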
This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren’t worth living than it is to create a large number of lives that are barely worth living.
I think that the Sadistic Conclusion is correct. I argue here that it is far more in line with typical human moral intuitions than the repugnant one.
There are several “impossibility” theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.
If you take the underlying principle of the Sadistic Conclusion, but change the concrete example to something smaller scale and less melodramatic than “Create lives not worth living to stop the addition of lives barely worth living,” you will find that it is very intuitively appealing.
For instance, if you ask people whether they should practice responsible family planning or spend money combating overpopulation, they agree that they should. But (if we assume the time and money spent on these efforts could have been devoted to something more fun) this is the same principle. The only difference is that instead of creating a new life not worth living, we are subtracting an equivalent amount of utility from existing people.