But in practice with humans, people tend to automatically separate their desires into “things for me” and “things for others”.
Separating preferences that way would make preference utilitarianism even more unattractive than it already is, I think. Critics already complain about the preferences of Gandhi and Ted Bundy getting equal weight. Under this patched scheme, Gandhi actually gets less weight than Ted Bundy because many of his preferences (the ones we admire the most, the other-regarding ones) don’t count when we’re aggregating, whereas Ted Bundy (who for the sake of argument only has selfish preferences) incurs no such penalty.
If you restrict the utilities being aggregated to “selfish” utilities, then in general, even though the utility functions of altruists are not being properly represented, altruists will still be better off than they would be in a more neutral aggregation. For instance, suppose Gandhi and Ted Bundy have “selfish” utility functions S_G and S_B respectively, and “actual” utility functions U_G and U_B. Since Gandhi is an altruist, U_G = S_G + S_B. Since Ted Bundy is selfish, U_B = S_B. If you aggregate by maximizing the sum of the selfish utility functions, then you are maximizing S_G + S_B, which is exactly the same as Gandhi’s actual utility function, so this is Gandhi’s most preferred outcome. If you maximize U_G + U_B, then the aggregation ends up worse for Gandhi according to his actual preferences, even though the only change was to make the representation of his preferences for the aggregation more accurate.
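To make the arithmetic concrete, here is a toy version with made-up numbers; the two outcomes and their values are purely illustrative, not anything from the original discussion:

```python
# Toy illustration with made-up numbers: two candidate outcomes, each assigning
# a "selfish" utility to Gandhi (S_G) and to Ted Bundy (S_B).
outcomes = {
    "A": {"S_G": 10, "S_B": 0},  # great for Gandhi's own goals, nothing for Bundy
    "B": {"S_G": 4, "S_B": 5},   # decent for both
}

def U_G(o):  # Gandhi is an altruist: his actual utility also counts Bundy's
    return o["S_G"] + o["S_B"]

def U_B(o):  # Bundy is selfish: his actual utility is just his selfish utility
    return o["S_B"]

# Aggregating only the selfish utilities maximizes S_G + S_B, which is
# literally Gandhi's actual utility function U_G.
best_selfish = max(outcomes, key=lambda k: outcomes[k]["S_G"] + outcomes[k]["S_B"])

# Aggregating the actual utilities maximizes U_G + U_B = S_G + 2*S_B, which
# double-counts Bundy's selfish utility relative to Gandhi's preferences.
best_actual = max(outcomes, key=lambda k: U_G(outcomes[k]) + U_B(outcomes[k]))

print(best_selfish, U_G(outcomes[best_selfish]))  # A, Gandhi's actual utility: 10
print(best_actual, U_G(outcomes[best_actual]))    # B, Gandhi's actual utility: 9
```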
There seem to be two different notions of “selfish” utilities in play here. One is “pre-update” utility, i.e. the utility function as it is prior to being modified by preference utilitarianism (or some other altruistic algorithm). That seems to be the interpretation you’re using here, and the one I was using in this comment.
Oscar_Cunningham, in his response, seemed to be using a different notion though. He identified “selfish” utility as “things for me” desires. I understood this to mean purely self-regarding desires (e.g. “I want a cheeseburger” rather than “I want the hungry to be fed”). This is an orthogonal notion. Preferences that are “non-selfish” in this sense (i.e. other-regarding) can be “selfish” in the sense you’re using (i.e. they can be pre-update).
The comment you were responding to was employing Oscar_Cunningham’s notion of selfishness (or at least my interpretation of his position, which might well be wrong), so what you say doesn’t apply. In particular, with this notion of selfishness, U_G will not simply equal S_G + S_B, since Gandhi’s other-regarding goals are not identical to Ted Bundy’s self-regarding goals. For instance, Gandhi could want Ted Bundy to achieve spiritual salvation even though Bundy doesn’t want this for himself. In that case, ignoring “unselfish” desires would simply mean that some of Gandhi’s desires don’t count at all.
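To put this notion in the same toy form (A_G is my own label for Gandhi’s other-regarding preferences, such as wanting Bundy’s salvation; the outcomes and numbers are again made up):

```python
# Toy illustration under the other-regarding notion of "selfish". A_G is my own
# label for Gandhi's other-regarding preferences (e.g. wanting Bundy's salvation).
outcomes = {
    "salvation":    {"S_G": 3, "S_B": 2, "A_G": 10},  # Bundy reforms, which he mildly dislikes
    "no_salvation": {"S_G": 3, "S_B": 6, "A_G": 0},   # Bundy does what Bundy wants
}

def U_G(o):  # Gandhi's actual utility: his self-regarding plus his other-regarding goals
    return o["S_G"] + o["A_G"]

def U_B(o):  # Bundy's actual utility: purely self-regarding
    return o["S_B"]

# Aggregating only "selfish" (self-regarding) utilities drops A_G entirely.
best_selfish = max(outcomes, key=lambda k: outcomes[k]["S_G"] + outcomes[k]["S_B"])

# Aggregating actual utilities keeps Gandhi's other-regarding goals in play.
best_actual = max(outcomes, key=lambda k: U_G(outcomes[k]) + U_B(outcomes[k]))

print(best_selfish)  # no_salvation: Gandhi's salvation preference never got counted
print(best_actual)   # salvation: 13 + 2 = 15 beats 3 + 6 = 9
```

In this toy case the selfish-only aggregation picks the outcome Gandhi’s actual utility function ranks at 3, while the total aggregation picks the one it ranks at 13.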
I agree with the point you’re making if we use the “pre-update” notion of selfishness, but then I think my objection in this comment still applies. Does this seem right?
True, if Gandhi’s other-regarding preferences are sufficiently different from Ted Bundy’s self-regarding preferences, then Gandhi will be better off according to his total preferences if we maximize the sum of their total preferences instead of the sum of their self-regarding preferences.
Of course, all this only makes any sense if we’re talking about an aggregation used by some other agent. Presumably Gandhi himself would not adopt an aggregation that makes him worse off according to his total preferences.
How do you distinguish between “selfish” and “non-selfish” utilities, though?
Someone who has both selfish and non-selfish utilities has to have some answer to this, but there are many possible solutions, and which solution you “should” use depends on what you care about. In the iterative convergence scenario you described in the original post, you implicitly assumed that the utilitarian agent already had a solution to this. After all, the agent started with some preferences before updating its utility function to account for the wellbeing of others. That makes it pretty easy: the agent could just declare that its preferences before the first iteration were its selfish preferences, and the preferences added in the first iteration were its non-selfish preferences, thus justifying stopping after one iteration, just as you would intuitively expect. Or maybe the agent will do something different (if it arrived at its preferences by some route other than starting with selfish preferences and adding in non-selfish preferences, then I guess it would have to do something different). There are a lot of ways an agent could partition its preferences into selfish and non-selfish components. What do you want me to do? Pick one and tell you that it’s the correct one? But then what about all the agents that partition their preferences into selfish and non-selfish components in a completely different manner that still seems reasonable?
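For concreteness, here is a minimal sketch of that easy case; the update rule, names, and numbers are all mine, and this is just one of the many possible partitions rather than the “correct” one:

```python
# Toy sketch of the "pre-update preferences are the selfish ones" bookkeeping.
# Everything here (names, numbers, the exact update rule) is my own illustration.

# Each agent's pre-update utility over two outcomes; on this bookkeeping these
# are exactly the preferences that get labelled "selfish".
selfish = {
    "gandhi": {"A": 1.0, "B": 0.0},
    "bundy":  {"A": 0.0, "B": 1.0},
}

def update(totals):
    """One utilitarian pass: each agent's new total utility is its own selfish
    part plus everyone else's selfish part. The non-selfish part of `totals`
    never feeds back in, so the map reaches a fixed point after one application."""
    return {
        agent: {o: selfish[agent][o]
                   + sum(selfish[other][o] for other in selfish if other != agent)
                for o in selfish[agent]}
        for agent in totals
    }

totals = dict(selfish)           # iteration 0: totals are just the selfish parts
totals = update(totals)          # iteration 1: altruistic components added
print(totals["gandhi"])          # {'A': 1.0, 'B': 1.0}
print(update(totals) == totals)  # True -- further iterations change nothing
```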