What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you’re just making something up to rationalize your preconceptions.
Overall Value is what one gets when one adds up the various values at stake, like average utility, the number of worthwhile lives, equality, etc. These values are not always fully compatible with each other, so a compromise often needs to be found between them. They also probably have diminishing returns relative to each other.
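To make that concrete, here is a minimal toy sketch of what I mean, with everything in it (the particular components, the equal weights, the log transform) chosen purely for illustration rather than as a claim about the true aggregation rule:

```python
import math

# Toy sketch of Overall Value: a weighted sum of concave transforms of
# several component values. The components, weights, and the log transform
# are illustrative assumptions, not a claim about the real aggregation rule.

def overall_value(avg_utility, worthwhile_lives, equality,
                  weights=(1.0, 1.0, 1.0)):
    components = (avg_utility, worthwhile_lives, equality)
    # log(1 + v) is concave, so each component has diminishing returns:
    # the same absolute improvement counts for less the more of that value
    # you already have, which is what forces compromises between the values.
    return sum(w * math.log(1 + v) for w, v in zip(weights, components))

# The same +50 of average utility is worth less when average utility is already high:
print(overall_value(10, 1000, 0.9))   # ~9.95
print(overall_value(60, 1000, 0.9))   # ~11.66  (+50 gains ~1.71)
print(overall_value(110, 1000, 0.9))  # ~12.26  (another +50 gains only ~0.60)
```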
When people try to develop moral theories, they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress that only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.
The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure more or less suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. These theories drew this insane conclusion because they didn't distinguish between types of pleasure, or consider that there were values other than pleasure. Eventually preference utilitarianism came along and proved far superior because it could take more values into account. I don't think it's perfected yet, but it's a step in the right direction.
I think that there are likely multiple values at stake in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, the total number of worthwhile lives and high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility.
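A quick arithmetic illustration of the tension, with numbers I made up purely to make the contrast stark:

```python
# Illustrative numbers only: two hypothetical worlds showing how total and
# average utility can pull in opposite directions (the Repugnant Conclusion).

world_a = {"population": 1_000, "utility_per_person": 80}      # lives well worth living
world_b = {"population": 1_000_000, "utility_per_person": 1}   # lives barely worth living

for name, w in (("A", world_a), ("B", world_b)):
    total = w["population"] * w["utility_per_person"]
    print(f"World {name}: total utility = {total:,}, "
          f"average utility = {w['utility_per_person']}")

# World A: total utility = 80,000, average utility = 80
# World B: total utility = 1,000,000, average utility = 1
# A pure total view ranks B above A; if average utility is also of value,
# A can come out ahead despite its lower total.
```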
Related to this, I also suspect that the reason that it seems wrong to sacrifice people to a utility monster, even though that would increase total aggregate utility, is that equality is a terminal value, not a byproduct of diminishing marginal returns in utility. A world where a utility monster shares with people may be a morally better world, even if it has lower total aggregate utility.
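One crude way to see the difference, again with made-up numbers: if equality only mattered via diminishing marginal returns, then a monster with no diminishing returns should get everything; if equality is a term in Overall Value in its own right, the sharing world can still come out ahead. The 10x conversion factor, the spread penalty, and the weight of 4 below are all illustrative assumptions, not a proposed measure of equality.

```python
import statistics

# Made-up numbers: a "utility monster" converts resources into utility ten
# times as efficiently as everyone else and has no diminishing returns, so a
# pure total view says to give it everything. Treating equality as a value in
# its own right (here, crudely, as a penalty on the spread of utilities) can
# still rank the sharing world higher.

def utilities(monster_share, people=10, resources=100):
    monster = 10 * monster_share * resources          # the monster's utility
    each = (1 - monster_share) * resources / people   # everyone else's utility
    return [monster] + [each] * people

def overall(utils, equality_weight):
    # Total utility minus a penalty for inequality (population std. deviation);
    # only the ranking between worlds matters here, not the absolute numbers.
    return sum(utils) - equality_weight * statistics.pstdev(utils)

for weight in (0, 4):
    feed_monster = overall(utilities(1.0), weight)   # monster gets everything
    share = overall(utilities(0.5), weight)          # resources split 50/50
    print(weight, feed_monster, share)

# weight 0 (pure total view):       feeding the monster wins (1000 vs 550)
# weight 4 (equality counts per se): sharing wins (about -150 vs about -19)
```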
I think that moral theories that just try to maximize total aggregate utility are actually oversimplifications of much more complex values. Accepting these theories, instead of trying to find what they missed, is Hollywood Rationality. For every moral advancement there are a thousand errors. The major challenge of ethics is determining when a new moral conclusion is genuine moral progress and when it is a mistake.