I reject utilitarianism, so the repugnant conclusion doesn’t apply to my ethics. But one can accept a form of utilitarianism that rejects the repugnant conclusion, for example, average preference utilitarianism.
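For concreteness, here is the averaging point in symbols; a stylized sketch with my own notation, not anything stated above. The average view scores a population of $n$ lives at welfare levels $u_1, \dots, u_n$ by

\[
V_{\text{avg}}(u_1,\dots,u_n) \;=\; \frac{1}{n}\sum_{i=1}^{n} u_i .
\]

Adding a life whose welfare lies below the current average strictly lowers $V_{\text{avg}}$, so a vast population at barely positive welfare never outscores a smaller, better-off population on this measure, and the repugnant comparison never gets going.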
I just reject utilitarianism on the grounds that you cannot actually compare or aggregate utility between two agents (their utilities not being comparable on the same axis, or alternatively being in ‘different units’), and on the grounds that human behavior does not satisfy the logical axioms (for example, the von Neumann–Morgenstern axioms) required for us to be said to have a utility function.
Well, you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else’s position and asking how good or bad that would be, relative to some other position). Also, the fact that human behavior doesn’t perfectly fit a utility function is not in itself a huge issue: just apply a best-fit function (this is the “revealed preference” approach to utility; a toy sketch of the fitting idea follows below).
Ken Binmore has a rather good paper on this topic; see here.
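To illustrate the “best fit” idea in the previous comment, here is a toy sketch; the choice data, the linear form of the utility function, and the logit fitting procedure are all my own illustrative assumptions, not Binmore’s model or anything described above.

```python
# Toy sketch of the "revealed preference" / best-fit idea: observe an agent's
# binary choices between bundles and fit a utility function whose predicted
# choices match the observed behavior as closely as possible (logit choice model).
import numpy as np

rng = np.random.default_rng(0)

# 200 observed choices between bundle a and bundle b (3 goods each) -- made-up data.
a = rng.normal(size=(200, 3))
b = rng.normal(size=(200, 3))
true_w = np.array([1.0, 0.5, -0.3])            # the agent's hidden preferences
noise = rng.logistic(size=200)                 # behavioral noise in the choices
chose_a = ((a - b) @ true_w + noise > 0).astype(float)

# Fit w by maximizing the likelihood of the observed choices (gradient ascent).
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-((a - b) @ w)))   # predicted P(choose a | w)
    w += 0.1 * (a - b).T @ (chose_a - p) / len(chose_a)

print("recovered weights:", np.round(w, 2))    # roughly recovers true_w
```

The fitted w plays the role of the best-fit utility function: it will not reproduce every observed choice, but it is the linear utility that rationalizes the behavior as closely as possible under the assumed noise model, which is the spirit of the revealed-preference response.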
I have seen it suggested that what people think of when they hear “a life barely worth living” is a life on the edge of suicide, which is well below the threshold of being worth living at all. As such, it’s not surprising that you wouldn’t want a world full of people like that.
If the lives are worth living, then it seems to me intuitively obvious that sufficiently many of them can together be arbitrarily valuable.
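In symbols (again a stylized sketch of my own, not the commenter’s): if each of the lives is worth some $\varepsilon > 0$ on a total view, then for any target value $V$,

\[
n \cdot \varepsilon > V \quad \text{whenever} \quad n > V / \varepsilon ,
\]

so a large enough number of lives that are each barely worth living adds up to more than any fixed amount of value.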
How do people respond to the repugnant conclusion?
By denying that having barely enough resources to live implies that a life need be barely worth living. See Poor Folks do Smile for details.
How is that a response to it? That’s just a reason why the choice might never come up.