Utilitarianism doesn’t use any particular utility function. It merely advocates acting based on an aggregation of pre-existing utility functions. So whether or not someone’s utility function is stupid is not something utilitarianism can control. If people in general have stupid utility functions, then preference utilitarianism will advocate stupid things.
In any case, the problem I was hinting at in the grandparent is known in the literature (following Rawls) as “utilitarianism doesn’t respect the separateness of persons.” For utilitarianism, what fundamentally matters is utility (however that is measured), and people are essentially just vessels for utility. If it’s possible to substantially increase the amount of utility in many of those vessels while substantially decreasing it in just one vessel, then utilitarianism will recommend doing that. After all, the individual vessels themselves don’t matter, just the amount of utility sloshing about (or, if you’re an average utilitarian, the number of vessels matters, but the vessels don’t matter beyond that). An extreme consequence of this kind of thinking is the whole “utility monster” problem, but it arises in slightly less fanciful contexts as well (kill the hermit, push the fat man in front of the trolley).
I fundamentally reject this mode of thinking. Morality should be concerned with how individuals, considered as individuals, are treated. This doesn’t mean that trade-offs between people’s rights/well-being/whatever are always ruled out, but they shouldn’t be as easy as they are under utilitarianism. There are concerns about things like rights, fairness, and equity that matter morally, and that utilitarianism can’t capture, at least not without relying on convoluted (and often implausibly convenient) justifications about how behaving in ways we intuitively endorse will somehow end up maximizing utility in the long run.
Ah, in that specific sort of situation, I imagine hedonic (as opposed to preference) utilitarians would say that yes, Eve has done a good thing.
If you’re asking me, I’d say no, but I’m not a utilitarian, partly because utilitarianism answers “yes” to questions similar to this one.
Only if you use a stupid utility function.
Yes, I should have rephrased that as ‘Only because hedonic utilitarianism is stupid’—how’s that?