True, if Gandhi’s other-regarding preferences are sufficiently different from Ted Bundy’s self-regarding preferences, then Gandhi will be better off, according to his total preferences, if we maximize the sum of their total preferences instead of the sum of their self-regarding preferences.
Of course, all this only makes any sense if we’re talking about an aggregation used by some other agent. Presumably Gandhi himself would not adopt an aggregation that makes him worse off according to his total preferences.
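To make that concrete, here is a toy sketch with made-up utility numbers (the outcomes and all the magnitudes are my own assumptions, chosen only to illustrate the direction of the effect):

```python
# Toy illustration (all utility numbers invented): two outcomes, A and B.
# Each agent's total preference = self-regarding part + other-regarding part.
self_regarding = {
    "Bundy":  {"A": 10, "B": 0},    # Bundy strongly prefers A for himself
    "Gandhi": {"A": 0,  "B": 1},    # Gandhi mildly prefers B for himself
}
other_regarding = {
    "Bundy":  {"A": 0,    "B": 0},  # Bundy doesn't care what happens to anyone else
    "Gandhi": {"A": -100, "B": 0},  # Gandhi strongly disprefers what A does to others
}

def total(agent, outcome):
    return self_regarding[agent][outcome] + other_regarding[agent][outcome]

outcomes = ["A", "B"]

# Maximizing the sum of self-regarding preferences picks A (10 + 0 > 0 + 1).
best_by_selfish = max(outcomes, key=lambda o: sum(u[o] for u in self_regarding.values()))

# Maximizing the sum of total preferences picks B (1 > -90).
best_by_total = max(outcomes, key=lambda o: sum(total(a, o) for a in self_regarding))

print(best_by_selfish, best_by_total)    # A B
print(total("Gandhi", best_by_selfish))  # -100: Gandhi under the self-regarding aggregation
print(total("Gandhi", best_by_total))    #    1: Gandhi under the total-preference aggregation
```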
How do you distinguish between “selfish” and “non-selfish” utilities, though?
Someone who has both selfish and non-selfish utilities has to have some answer to this, but there are many possible solutions, and which solution you “should” use depends on what you care about. In the iterative convergence scenario you described in the original post, you implicitly assumed that the utilitarian agent already had a solution to this. After all, the agent started with some preferences before updating its utility function to account for the wellbeing of others. That makes it pretty easy: the agent could just declare that its preferences before the first iteration were its selfish preferences, and that the preferences added in the first iteration were its non-selfish preferences, thus justifying stopping after one iteration, just as you would intuitively expect. Or maybe the agent will do something different (if it arrived at its preferences by some route other than starting with selfish preferences and adding in non-selfish preferences, then I guess it would have to do something different). There are A LOT of ways an agent could partition its preferences into selfish and non-selfish components. What do you want me to do? Pick one and tell you that it’s the correct one? But then what about all the agents that partition their preferences into selfish and non-selfish components in a completely different, but still seemingly reasonable, manner?
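For concreteness, here is a minimal sketch of that particular “stop after one iteration” partition, assuming the update just adds a weighted sum of everyone else’s pre-iteration utilities (the function name, the weights, and the data layout are all my invention, not anything from the original post):

```python
# Rough sketch of one possible partition (assumed, not canonical): treat the agent's
# pre-iteration utility function as its "selfish" part, add a weighted sum of everyone
# else's pre-iteration utilities exactly once, and stop there rather than iterating.

def one_step_utilitarian(selfish, weights):
    """selfish: {agent: {outcome: utility}}, weights: {agent: float} (all assumed)."""
    totals = {}
    for agent, own in selfish.items():
        totals[agent] = {
            outcome: own[outcome] + sum(weights[other] * selfish[other][outcome]
                                        for other in selfish if other != agent)
            for outcome in own
        }
    return totals
```

A different partition would simply relabel which terms count as “selfish” and which as “added”, and get a different stopping point, which is exactly the arbitrariness I’m pointing at.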