The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.
Even if that is the case, I think that this straw man is commonly accepted enough that it needs to be taken down.
Given any world A with positive utility, there exists at least one other world B with more people and less average utility per person which your utility system will judge to be better, i.e. U(B) > U(A).
I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other: in a world with a low population, creating new people is more valuable, but in a world with a high population, improving the lives of existing people is of more value.
Then I shut up and multiply and get the conclusion that the optimal society is one that has a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living there exists another, better world with a lower population and higher quality of life.
Now, there may be some “barely worth living” societies so huge that their contribution to overall value is larger than a much smaller society with a higher standard of living, even considering diminishing returns. However, that “barely worth living” society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high quality lives. However, it would be much worse than a planet with a somewhat smaller population, but a higher quality of life.
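The ranking argued for above can be sketched as a toy model. The functional forms and numbers here are my own illustrative assumptions (concave log-shaped contributions standing in for "diminishing returns"), not a claim about the true shape of these values:

```python
import math

def overall_value(population, avg_utility):
    """Toy 'overall value': two contributory values, one for the number of
    worthwhile lives and one for average quality of life, each with
    diminishing returns. The log forms and the equal weighting are
    illustrative assumptions only."""
    lives_term = math.log(1 + population)      # more lives helps, but less and less
    quality_term = math.log(1 + avg_utility)   # higher average helps, but less and less
    return lives_term + quality_term

island = overall_value(population=10_000, avg_utility=100.0)        # few, very high quality
packed = overall_value(population=10_000_000_000, avg_utility=0.1)  # huge, barely worth living
moderate = overall_value(population=1_000_000_000, avg_utility=50.0)  # smaller, high quality

# Under these assumptions the huge barely-worth-living world beats the
# tiny high-quality island, yet still loses to the moderate world:
assert packed > island
assert moderate > packed
```

With diminishing returns on both contributory values, a lopsided world loses to one that balances them, which is the structure of the argument above.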
Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered “better”.
I’m not interested in maximizing total utility. I’m interested in maximizing overall value, of which total utility is only one part.
A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice between life X and non-existence. Such a life will, in my opinion, be comfortably clear of the suicide threshold, and would represent an improvement in the world.
To me it would, in many cases, be morally better to use the resources that would be used to create a “life that someone would choose to have” to instead improve the lives of existing people so that they are above that threshold. That would contribute more to overall value, and therefore make an even bigger improvement in the world.
Why wouldn’t it? It is by definition, a life that someone would choose to have rather than not have! How could that not improve the world?
It’s not that it wouldn’t improve the world. It’s that it would improve the world less than enhancing the utility of the people who already exist instead. You can criticize someone who is doing good if they are passing up opportunities to do even more good.
RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.
Not really. In “torture vs specks” your choice will have the same effect on total and average utility (they either both go down a little or both go down a lot). In the RC your choice will affect them differently (one goes up and the other goes down). Since total and average utility (or more precisely, creating new lives worth living and enhancing existing lives) both contribute to overall value, if you shut up and multiply you’ll conclude that the best way to maximize overall value is to increase both of them, not to maximize one at the expense of the other.
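The structural difference claimed here can be checked with toy numbers. The population sizes and utility values below are arbitrary illustrations, chosen only to show the direction each measure moves:

```python
def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

baseline = [10.0] * 1000  # a world of 1000 people at utility 10 (arbitrary numbers)

# Torture vs specks: either option adds harm, so total and average
# move in the SAME direction (both down, a little or a lot).
specks = [u - 0.01 for u in baseline]   # tiny harm to everyone
torture = [-1000.0] + baseline[1:]      # huge harm to one person
assert total(specks) < total(baseline) and average(specks) < average(baseline)
assert total(torture) < total(baseline) and average(torture) < average(baseline)

# A Repugnant Conclusion step: adding many barely-worth-living lives
# raises total but lowers average: the two measures DIVERGE.
expanded = baseline + [0.1] * 100_000
assert total(expanded) > total(baseline)
assert average(expanded) < average(baseline)
```

So the two thought experiments put different pressures on a theory: torture vs specks cannot be dissolved by trading total against average utility, while the RC turns exactly on that trade-off.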
What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you’re just making something up to rationalize your preconceptions.
Overall Value is what one gets when one adds up various values, like average utility, number of worthwhile lives, equality, etc. These values are not always 100% compatible with each other; often a compromise needs to be found between them. They also probably have diminishing returns relative to each other.
When people try to develop moral theories they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress which only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.
The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure sort of suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. It turned out the reason it drew this insane conclusion was that it didn’t distinguish between types of pleasure, or consider that there were other values than pleasure. Eventually preference utilitarianism came along and proved far superior because it could take more values into account. I don’t think it’s perfected yet, but it’s a step in the right direction.
I think that there are likely multiple values in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, total number of worthwhile lives and high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility.
Related to this, I also suspect that the reason that it seems wrong to sacrifice people to a utility monster, even though that would increase total aggregate utility, is that equality is a terminal value, not a byproduct of diminishing marginal returns in utility. A world where a utility monster shares with people may be a morally better world, even if it has lower total aggregate utility.
I think that moral theories that just try to maximize total aggregate utility are actually oversimplifications of much more complex values. Accepting these theories, instead of trying to find what they missed, is Hollywood Rationality. For every moral advancement there are a thousand errors. The major challenge of ethics is determining when a new moral conclusion is genuine moral progress and when it is a mistake.