I reject utilitarianism, so the repugnant conclusion doesn’t apply to my ethics. But one can accept a form of utilitarianism that rejects the repugnant conclusion, for example, average preference utilitarianism.
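To make the contrast concrete (my notation, just the standard textbook formulation): total utilitarianism ranks a population of $n$ people with utilities $u_1, \dots, u_n$ by the sum, while average utilitarianism divides by the population size,

$$W_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad W_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} u_i.$$

Adding enough lives that are barely worth living can always push $W_{\text{total}}$ above that of a small, very happy population, but it drags $W_{\text{avg}}$ down, which is why the averaging view blocks the repugnant conclusion.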
I just reject utilitarianism on the grounds that you cannot actually compare or aggregate utility between two agents (their utilities are not comparable on the same axis, or, alternatively, are in 'different units'), and on the grounds that human behavior does not satisfy the logical axioms required for us to be said to have a utility function.
Well, you can make such comparisons if you allow for empathic preferences (imagine placing yourself in someone else's position, and ask how good or bad that would be relative to some other position). Also, the fact that human behavior doesn't perfectly fit a utility function is not in itself a huge issue: just fit a best-fit utility function to the behavior (this is the "revealed preference" approach to utility); a rough sketch of what that looks like is below.
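A toy version of the best-fit idea (my own construction, not Binmore's method): treat each observed pairwise choice as noisy evidence about an underlying utility function, and fit that function by maximum likelihood under a logistic choice model, which happily absorbs choices that violate the axioms.

```python
# Minimal sketch of revealed preference as "best fit" (hypothetical data and model):
# observe an agent's pairwise choices, assume utility is linear in each option's
# features, and fit the weights under a logistic (Luce-style) choice model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each option is a feature vector; the agent's noisy choices
# are generated from hidden "true" weights purely for demonstration.
true_w = np.array([2.0, -1.0, 0.5])
n_choices, n_features = 500, 3
a = rng.normal(size=(n_choices, n_features))   # option A in each pair
b = rng.normal(size=(n_choices, n_features))   # option B in each pair
p_choose_a = 1.0 / (1.0 + np.exp(-(a - b) @ true_w))
chose_a = rng.random(n_choices) < p_choose_a   # observed, somewhat inconsistent choices

# Fit weights w so that u(x) = w . x best explains the observed choices
# (plain gradient ascent on the log-likelihood of the logistic choice model).
w = np.zeros(n_features)
lr = 0.5
for _ in range(2000):
    diff = (a - b) @ w                      # predicted utility gap u(A) - u(B)
    p = 1.0 / (1.0 + np.exp(-diff))         # predicted probability of choosing A
    grad = (a - b).T @ (chose_a - p) / n_choices
    w += lr * grad

print("recovered weights:", np.round(w, 2))  # close to true_w, up to noise
```

The logistic noise is what lets a single function "best fit" a choice record that doesn't perfectly satisfy the axioms; it answers the "no utility function" worry, while interpersonal comparison is the separate problem the empathic-preferences move is aimed at.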
Ken Binmore has a rather good paper on this topic; see here.