It seems like there’s a fairly simple solution to the problem. Instead of thinking of utilitarianism as maximizing the sum of the utility of all sentient beings, why not think of it in terms of increasing the average utility of all sentient beings, with the caveat that it is also unethical to end the life of any currently existing sentient being?
There’s no reason that thinking of it as a sum is inherently more rational than thinking of it as an average. Of course, like I said, you have to add the rule that you can’t end the currently existing life of intelligent beings just to increase the average happiness, or else you get even more repugnant conclusions. But with that rule, it seems like you get overall better conclusions than if you think of utility as a sum.
For example, I don’t see why we have any specific ethical mandate to bring new intelligent life into the world, and in fact I would think that it would only be ethically justified if that new intelligent being would have a happiness level at least equal to the average for the world as a whole. (I.e., you shouldn’t have kids unless you think you can raise them at least as well as the average human being would.)
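To make the contrast concrete, the two views can be written out (a sketch in my own notation, not the thread’s, with $u_i$ the welfare of sentient being $i$ and $n$ the number of such beings):

$$U_{\text{total}} = \sum_{i=1}^{n} u_i \qquad \text{vs.} \qquad U_{\text{avg}} = \frac{1}{n} \sum_{i=1}^{n} u_i$$

The proposal above is to maximize $U_{\text{avg}}$, subject to a constraint (or, as discussed below, a utility cost) on ending any currently existing life.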
Is this a deontological rule, or a consequentialist rule? In either case, how easy is it to pin down the meaning of ‘end the life’?
It would need to be consequentialist, of course.

If we’ve defined welfare/happiness/personal utility, we could define ‘end of life’ as “no longer generating any welfare, positive or negative, in any possible future”. Or something to that effect, which should be good enough for our purposes.
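Formally (my own rendering of that wording, not the commenter’s): individual $i$’s life has ended at time $t$ iff

$$\forall f \in F,\ \forall t' > t: \quad u_i^f(t') = 0,$$

where $F$ is the set of possible futures and $u_i^f(t')$ is the welfare $i$ generates at time $t'$ in future $f$.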
But then, is not any method which does not prolong a life equivalent to ending it? This then makes basically any plan unethical. If unethical is just a utility cost, like you imply elsewhere, then there’s still the possibility that it’s ethical to kill someone to make others happier (or to replace them with multiple people), and it’s not clear where that extra utility enters the utility function from. If it’s the prohibition of plans entirely, then the least unacceptable plan is the one that sacrifices everything possible to extend lives as long as possible, which seems like a repugnant conclusion of its own.
But then, is not any method which does not prolong a life equivalent to ending it?
Yes—but the distinction between doing something through action or inaction seems a very feeble one in the first place.
If unethical is just a utility cost, like you imply elsewhere, then there’s still the possibility that it’s ethical to kill someone to make others happier
Generally, you don’t want to make any restriction total/deontological (“It’s never good to do this”), or else it dominates everything else in your morality. You’d want to be able to kill someone for a large enough gain—just not to be able to do so continually for slight increases in total (or average) happiness. Killing people who don’t want to die should carry a cost.
why not think of it in terms of increasing the average utility of all sentient beings, with the caveat that it is also unethical to end the life of any currently existing sentient being?
If a consequentialist ethic has an obvious hole in it, that usually points to a more general divergence between the ethic and the implicit values it’s trying to approximate. Applying a deontological patch over the first examples you see won’t fix the underlying flaw; it’ll just force people to exploit it in stranger and more convoluted ways.
For example, if we defined utility as subjective pleasure, we might be tempted to introduce an exception for, say, opiate drugs. But maximizing utility under that constraint just implies wireheading in more subtle ways. You can’t actually fix the problem without addressing other aspects of human values.

I was never intending a deontological patch, merely a utility cost to ending a life.

That’s average utilitarianism, which has its own problems in the literature.

This is a good general caveat to have.
Average utilitarianism implies the Sadistic Conclusion: if average welfare is very negative, then this rule calls for creating beings with lives of torture not worth living as long as those lives are even slightly better than the average. This helps no one and harms particular individuals.

It’s discussed in the SEP article.
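The arithmetic behind that claim, with made-up numbers: take ten beings each at welfare $-100$, so the average is $-100$. Adding an eleventh being at $-99$, a life still not worth living, gives

$$\frac{10 \times (-100) + (-99)}{11} \approx -99.91 > -100,$$

so the averagist counts the extra tormented life as an improvement.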
Frankly, if the only flaw in that moral theory is that it comes to a weird answer in a world that is already a universally horrible hellscape for all sentient beings, then I don’t see that as a huge problem with it.

In any case, I’m not sure that’s the wrong answer anyway. If every generation is able to improve the lives of the next generation, and keep moving the average utility in a positive direction, then the species is heading in the right direction, and likely would be better off in the long run than if they just committed mass suicide (like additive utilitarian theory might suggest). For that matter, there’s a subjective aspect to utility; a medieval peasant farmer might be quite happy if he is 10% better off than all of his neighbors.
I think you’re on the right track. I believe that a small population with high utility per capita is better than a large one with low utility per capita, even if the total utility is larger in the large population. But I think tying that moral intuition to the average utility of the population might be the wrong way to go about it, if only because it creates problems like the one CarlShulman mentioned.
I think a better route might be to somehow attach a negative term to the addition of more people past a certain point, or something like that. Or you could add a caveat that the system acts like total utilitarianism while the average is negative, and like average utilitarianism when it’s positive.
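Roughly formalized (a sketch; the threshold $n_0$ and penalty $c > 0$ are invented symbols, not anything pinned down in the thread), the first suggestion is

$$U = \sum_{i=1}^{n} u_i - c \cdot \max(0,\ n - n_0),$$

and the second is the hybrid

$$U = \begin{cases} \sum_i u_i & \text{if } \bar{u} < 0 \\ \bar{u} & \text{otherwise,} \end{cases} \qquad \bar{u} = \frac{1}{n}\sum_i u_i.$$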
Btw, in your original post you mention that we’d need a caveat to stop people from killing existing people to raise the average. A simple solution to that would be to continue to count people in the average even after they are dead.
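A minimal sketch of that bookkeeping (hypothetical code, not something anyone in the thread wrote): the dead keep a frozen welfare score that still counts toward the average.

```python
def average_welfare(living, dead):
    """Average welfare in which the dead still count.

    `living` maps each living person to their current welfare;
    `dead` maps each dead person to the welfare they had at death.
    Because the dead keep a frozen score in both the numerator and
    the denominator, moving someone from `living` to `dead` leaves
    the average unchanged at that moment, so killing an unhappy
    person does not, by itself, raise the average.
    """
    scores = list(living.values()) + list(dead.values())
    return sum(scores) / len(scores)

# Killing the least happy person no longer improves the score:
living = {"alice": 10.0, "bob": 2.0}
dead = {}
before = average_welfare(living, dead)  # (10 + 2) / 2 = 6.0
dead["bob"] = living.pop("bob")         # bob dies; his 2.0 is frozen
after = average_welfare(living, dead)   # still (10 + 2) / 2 = 6.0
assert after == before
```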
Average utilitarianism implies the Sadistic Conclusion: if average welfare is very negative, then this rule calls for creating beings with lives of torture not worth living as long as those lives are even slightly better than the average.
That’s not the Sadistic Conclusion, that’s something else. I think Michael Huemer called it the “Hell Conclusion.” It is a valid criticism of average utilitarianism, whatever it’s called. Like you, I reject the Hell Conclusion.
The Sadistic Conclusion is the conclusion that, if adding more people with positive welfare to the world is bad, it might be better to do some other bad thing than to add more people with positive welfare. Arrhenius gives as an example adding one person with negative welfare instead of a huge number of people with positive welfare. But really it could be anything. You could also harm (or refuse to help) existing people to avoid creating more people.
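With made-up numbers, under averagism: start from 1,000 people at welfare 50 (average 50). Adding 1,000,000 people at welfare $+1$ versus adding a single person at welfare $-1$ gives

$$\frac{1000 \cdot 50 + 10^6 \cdot 1}{1{,}001{,}000} \approx 1.05 \qquad \text{vs.} \qquad \frac{1000 \cdot 50 - 1}{1001} \approx 49.95,$$

so the averagist prefers adding the one miserable life to adding the million good ones.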
I accept the Sadistic Conclusion wholeheartedly. I harm myself in all sorts of ways in order to avoid adding more people to the world. For instance, I spend money on condoms instead of on candy, abstain from sex when I don’t have contraceptives, and other such things. Most other people seem to accept the SC as well. I think the only reason it seems counterintuitive is that Arrhenius used a particularly nasty and vivid example of it that invoked our scope insensitivity.
SEP:

More exactly, Ng’s theory implies the “Sadistic Conclusion” (Arrhenius 2000a,b): For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.
There are two different reasons why these population principles state that it might be preferable to add lives of negative welfare. The first, which I referred to as the “Hell Conclusion,” is that a principle that values average welfare might consider it good to add lives with negative welfare in a situation where average welfare is negative, because doing so would up the average. The second, which I referred to as the “Sadistic Conclusion,” states that, if adding lives with positive welfare can sometimes be bad, then adding a smaller amount of lives with negative welfare might sometimes be less bad.
I am pretty sure I have my terminology straight. I am pretty sure that the “Sadistic Conclusion” the page you linked to is referring to is the second reason, not the first. That being said, your original argument is entirely valid. Adding tormented lives to raise the average is bad, regardless of whether you refer to it as the “Sadistic Conclusion” or the “Hell Conclusion.” I consider it a solid argument against naive and simple formulations of average utilitarianism.
What I refer to as the Sadistic Conclusion differs from the Hell Conclusion in a number of ways, however. Under the Hell Conclusion adding tormented lives is better than adding nobody, providing the tormented lives are slightly less tormented than average. Under the Sadistic Conclusion adding tormented lives is still a very bad thing, it just may be less bad than adding a huge amount of positive lives.
We should definitely reject the Hell Conclusion, but the Sadistic Conclusion seems correct to me. Like I said, people harm themselves all the time in order to avoid having children. All the traditional form of the SC does is concentrate all that harm into one person, instead of spreading it out among a lot of people. It still considers adding negative lives to be a bad thing, just sometimes less bad than adding vast amounts of positive lives.
Are you saying we should maximize the average utility of all humans, or of all sentient beings? The first one is incredibly parochial, but the second one implies that how many children we should have depends on the happiness of aliens on the other side of the universe, which is, at the very least, pretty weird.
Not having an ethical mandate to create new life might or might not be a good idea, but average utilitarianism doesn’t get you there. It just changes the criteria in bizarre ways.
Are you saying we should maximize the average utility of all humans, or of all sentient beings?
I’m not saying anything, at this point. I believe that the best population ethics is likely to be complicated, just as standard ethics are, and I haven’t fully settled on either yet.