Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.
What Parfit actually demonstrated goes something more like this:
For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:
Given any world A with positive total utility, there exists at least one other world B with more people and less average utility per person which your utility system will judge to be better, i.e.: U(B) > U(A).
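To make the paradox concrete with numbers of my own (not Parfit's): let world A hold 10 people at utility 100 each, and world B hold 10,001 people at utility 0.1 each. Then

$$U(A) = 10 \times 100 = 1000, \qquad U(B) = 10001 \times 0.1 = 1000.1 > U(A),$$

and any additive system must judge B better, even though B's average utility is a thousand times lower.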
Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A, nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered “better”. This of course implies either more resources, or more utility-efficient use of resources, in the “better” world.
The cable channel analogy would be to say “As long as every extra cable channel I add provides at least some constant positive utility epsilon>0, even if it is vanishingly small, there is some number of cable channels I can put into your feed that will make it worth $100 to you.” Is this really so hard to accept? It seems obviously true, even if irrelevant to real life, where most of us have diminishing marginal utility of cable channels.
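The arithmetic behind the analogy, spelled out (with epsilon measured in dollars per channel, as the analogy assumes):

$$N \cdot \varepsilon \ge 100 \quad\Longrightarrow\quad N \ge \frac{100}{\varepsilon},$$

so even at $\varepsilon = \$0.001$ per channel, a feed of 100,000 channels clears the $100 bar.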
Parfit’s point is that it is hard for the human brain to accept that some world with uncounted numbers of people whose lives are just barely worth living could be better than any world with a bunch of very happy, high-utility people (he can’t accept it himself), even though any algebraically coherent system of utility leads to that very conclusion.
John Maxwell’s comment gets to the heart of the issue: the term “just barely worth living”. Philosophy always struggles where math meets natural language, and this is a classic example.
The phrase “just barely worth living” conjures up an image of a life that is barely better than the kind of never-ending torture/loneliness scenario in which we might consider encouraging suicide.
But the taboos against suicide are strong. Even putting taboos aside, there is a large amount of collateral damage from suicide. The most obvious is that anyone who has emotional or family connections to the person will suffer. Even people who are very isolated will have some connection, and a suicide could trigger grief or depression in anyone who encounters them or their story. There are also some very scary studies showing suicide and accident rates going up in the aftermath of publicized suicides or accidents, due to lemming-like social programming in humans.
So it is quite rational for most people not to consider suicide until their personal utility is highly negative, if they care at all about the people or world around them. For most of us, a life just above the suicide threshold would be a negative-utility life, and one with fairly large negative utility at that.
A life with utility of positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice between life X and non-existence. Such a life will, in my opinion, be comfortably clear of the suicide threshold, and would represent an improvement in the world. Why wouldn’t it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?
Given this interpretation of “just barely worth living”, I accept the so-called Repugnant Conclusion, and go happily on my way calculating utility functions.
RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.
Tabooing “life just barely worth living”, and then shutting up and multiplying, led me to realize that the so-called Repugnant Conclusion wasn’t repugnant after all.
The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.
Even if that is the case, I think that straw man is commonly accepted enough that it needs to be taken down.
Given any world A with positive total utility, there exists at least one other world B with more people and less average utility per person which your utility system will judge to be better, i.e.: U(B) > U(A).
I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other, so in a world with a low population creating new people is more valuable, but in a world with a high population improving the lives of existing people is of more value.
Then I shut up and multiply and get the conclusion that the optimal society is one with a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living, there exists another, better world with a lower population and a higher quality of life.
Now, there may be some “barely worth living” societies so huge that their contribution to Overall Value is larger than that of a much smaller society with a higher standard of living, even considering diminishing returns. But that “barely worth living” society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high-quality lives, yet much worse than a planet with a somewhat smaller population and a higher quality of life.
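A toy numeric sketch of that ordering (entirely my own construction; the square-root form and every number below are invented stand-ins for “diminishing returns relative to each other”):

```python
import math

def overall_value(population: int, avg_utility: float) -> float:
    """Toy Overall Value: average quality of life, weighted by a
    diminishing-returns (square-root) credit for population size.
    The functional form is an illustrative assumption, nothing more."""
    if population <= 0:
        return 0.0
    return avg_utility * math.sqrt(population)

# Three worlds mirroring the island/planet example above:
island   = overall_value(10_000,          100.0)  # small, very high quality
planet_a = overall_value(100_000_000_000,   0.1)  # huge, lives barely worth living
planet_b = overall_value(10_000_000_000,   50.0)  # somewhat smaller, high quality

print(f"island:   {island:>12,.0f}")    # ~10,000
print(f"planet A: {planet_a:>12,.0f}")  # ~31,623   (beats the island)
print(f"planet B: {planet_b:>12,.0f}")  # 5,000,000 (far better than planet A)
```

Under these assumed numbers the huge barely-worth-living planet does outrank the island, yet loses badly to a somewhat smaller planet with a high quality of life, which is exactly the ordering described above.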
Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A, nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered “better”.
I’m not interested in maximizing total utility. I’m interested in maximizing Overall Value, of which total utility is only one part.
A life with utility of positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice between life X and non-existence. Such a life will, in my opinion, be comfortably clear of the suicide threshold, and would represent an improvement in the world.
To me it would, in many cases, be morally better to use the resources that would be used to create a “life that someone would choose to have” to instead improve the lives of existing people, raising them further above that threshold. That would contribute more to Overall Value, and therefore make an even bigger improvement in the world.
Why wouldn’t it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?
It’s not that it wouldn’t improve the world. It’s that it would improve the world less than enhancing the utility of the people who already exist instead. You can criticize someone who is doing good if they are passing up opportunities to do even more good.
RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.
Not really. In “torture vs specks” your choice has the same effect on total and average utility (they either both go down a little or both go down a lot). In the RC your choice affects them differently (one goes up and the other goes down). Since total and average utility (or more precisely, creating new lives worth living and enhancing existing lives) both contribute to Overall Value, if you shut up and multiply you’ll conclude that the best way to maximize Overall Value is to increase both of them, not to maximize one at the expense of the other.
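The sign structure of the two dilemmas, with invented numbers (a sketch only; 3^^^^3 is far too large to represent, so a smaller stand-in is used):

```python
# Torture vs. specks: either choice lowers total AND average utility together.
N = 3 ** 27                     # stand-in for the unrepresentably large 3^^^^3
torture = {"d_total": -1_000_000, "d_average": -1_000_000 / N}  # one ruined life
specks  = {"d_total": -0.001 * N, "d_average": -0.001}          # tiny loss for all

# Repugnant Conclusion step: total and average move in OPPOSITE directions.
world_a = {"people": 1_000,     "avg": 100.0}
world_b = {"people": 2_000_000, "avg": 0.1}
total_a = world_a["people"] * world_a["avg"]   # 100,000
total_b = world_b["people"] * world_b["avg"]   # 200,000
print(total_b > total_a, world_b["avg"] < world_a["avg"])  # True True
```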
What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you’re just making something up to rationalize your preconceptions.
Overall Value is what one gets when one adds up various values, like average utility, number of worthwhile lives, equality, etc. These values are not always 100% compatible with each other; often a compromise needs to be found between them. They also probably have diminishing returns relative to each other.
When people try to develop moral theories they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress which only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.
The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure sort of suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. It turned out the reason it drew this insane conclusion was that it didn’t distinguish between types of pleasure, or consider that there were other values than pleasure. Eventually preference utilitarianism came along and proved far superior because it could take more values into account. I don’t think it’s perfected yet, but it’s a step in the right direction.
I think there are likely multiple values involved in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, the total number of worthwhile lives and a high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility.
Related to this, I also suspect that the reason that it seems wrong to sacrifice people to a utility monster, even though that would increase total aggregate utility, is that equality is a terminal value, not a byproduct of diminishing marginal returns in utility. A world where a utility monster shares with people may be a morally better world, even if it has lower total aggregate utility.
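One crude way to make “equality as a terminal value” algebraic (my sketch; the variance penalty and the weight $\lambda$ are assumptions, not a claim about how it must be formalized):

$$V = \sum_i u_i \;-\; \lambda \operatorname{Var}(u_1, \dots, u_n), \qquad \lambda > 0.$$

Feeding a utility monster raises $\sum_i u_i$ but also inflates the variance term, so for a large enough $\lambda$ the sharing world scores higher even with a lower total.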
I think that moral theories that just try to maximize total aggregate utility are actually oversimplifications of much more complex values. Accepting these theories, instead of trying to find what they missed, is Hollywood Rationality. For every moral advancement there are a thousand errors. The major challenge of ethics is determining when a new moral conclusion is genuine moral progress and when it is a mistake.