We should give everybody as many utilions as we can
Not at all. We’re all just trying to maximize our own utilions. My utility
function has a term in it for other people’s happiness. Maybe it has a term
for other people’s utilions (I’m not sure about that one though). But when I
say I want to maximize utility, I’m just maximizing one utility function: mine.
Consideration for others is already factored in.
In fact I think you’re confusing two different topics: decision theory and
ethics. Decision theory tells us how to get more of what we want (including the
happiness of others). Decision theory takes the utility function as a given.
Ethics is about figuring out what the actual content of our utility
functions is, especially as it concerns our interactions with others, and our
obligations towards them.
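To make the “term for other people’s happiness” picture concrete, here is one way it is often formalized (the additive split and the weights $w_j$ are illustrative assumptions on my part, not something the comment above commits to):

$$U_{\text{me}}(x) \;=\; S_{\text{me}}(x) \;+\; \sum_{j \neq \text{me}} w_j \, H_j(x)$$

where $S_{\text{me}}$ is the purely self-regarding part, $H_j(x)$ is person $j$’s happiness in outcome $x$, and $w_j \geq 0$ measures how much I care about person $j$. Maximizing $U_{\text{me}}$ is still maximizing a single function (mine), even though other people’s happiness appears inside it; maximizing $\sum_j H_j(x)$ directly is the special case where the selfish term vanishes and every weight is equal.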
Seconded. It seems to me that what’s universally accepted is that rationality is maximizing some utility function, which might not be the sum/average of happiness/preference-satisfaction of individuals. I don’t know if there’s a commonly-used term for this. “Consequentialism” is close and is probably preferable to “utilitarianism”, but seems to actually be a superset of the view I’m referring to, including things like rule-consequentialism.
Thirded. I would add that my utility function need not have a term for your utility function in its entirety. If you intrinsically like murdering small children, there’s no positive term in my utility function for that. Not all of your values matter to me.
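One way to spell that out, again purely as an illustrative sketch: instead of putting your whole utility function $U_{\text{you}}$ into mine, I can include only selected components of it,

$$U_{\text{me}}(x) \;=\; S_{\text{me}}(x) \;+\; \sum_k w_k \, v_k^{\text{you}}(x),$$

where the $v_k^{\text{you}}$ are your individual values and the weights $w_k$ are mine to set. For “you enjoy a good meal”, $w_k > 0$; for “you enjoy murdering small children”, $w_k$ is zero or negative. Caring about you does not require endorsing everything you value.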