Not at all. We’re all just trying to maximize our own utilons. My utility function has a term in it for other people’s happiness. Maybe it has a term for other people’s utilons (I’m not sure about that one, though). But when I say I want to maximize utility, I’m just maximizing one utility function: mine. Consideration for others is already factored in.
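To make that concrete, here is one purely illustrative shape such a function could take; the weights \(\alpha_i\) and the happiness terms \(h_i\) are hypothetical, not anything specified above:

\[ U_{\text{me}}(x) = h_{\text{me}}(x) + \sum_{i \neq \text{me}} \alpha_i \, h_i(x), \qquad \alpha_i \ge 0 \]

where \(h_i(x)\) is person \(i\)’s happiness in outcome \(x\). Maximizing \(U_{\text{me}}\) already “factors in” consideration for others through the \(\alpha_i\) terms, without my maximizing anyone else’s utility function directly.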
Seconded. It seems to me that what’s universally accepted is that rationality is maximizing some utility function, which might not be the sum/average of the happiness or preference-satisfaction of individuals. I don’t know if there’s a commonly-used term for this. “Consequentialism” is close and is probably preferable to “utilitarianism”, but it seems to actually be a superset of the view I’m referring to, since it also includes things like rule-consequentialism.