Also, as is the case for most people, the happiness of the model utilitarians is correlated with their utility.
This is untrue in general. I would prefer that someone I am unaware of be happy, but their happiness cannot make me happier, since I am unaware of them. In general, it is important to distinguish the concept of a utility function, which describes the decisions an agent makes, from that of a hedonic function, which describes happiness, or, if you are not a purely hedonic utilitarian, from whatever functions describe the other things that are mentioned in, but not identical to, your utility function.
Yes, I may not know the exact value of my utility, since I don’t know the value of every argument it takes, and yes, there are consequently changes in utility that aren’t accompanied by corresponding changes in happiness, but no, this doesn’t mean that utility and happiness aren’t correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn’t the case.
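To make that concrete, here is a minimal sketch of the kind of relation I have in mind; the additive form and the symbols \(U\), \(H\), and \(w_i\) are my own illustration, not anything from the original discussion:

\[
U(x) \;=\; H(x) \;+\; \sum_{i} w_i(x),
\]

where \(H(x)\) is my happiness in state \(x\) (which can only depend on what I am aware of) and \(w_i(x)\) is the welfare of person \(i\), including people I have never heard of. A change in some \(w_i\) for an unknown \(i\) changes \(U\) without changing \(H\), yet because \(H\) enters \(U\) with positive weight, the two quantities remain correlated rather than independent.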
Also, this whole issue could be sidestepped if the utility function of the first agent had the utility of the second agent as an argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agents’ utilities.
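For what it’s worth, such a definition need not be circular; here is a toy linear sketch (the coefficients \(a\) and \(b\) and the linear form are assumptions for illustration only):

\[
U_1 = H_1 + a\,U_2, \qquad U_2 = H_2 + b\,U_1.
\]

Substituting the second equation into the first gives \(U_1 = H_1 + a H_2 + ab\,U_1\), so

\[
U_1 = \frac{H_1 + a H_2}{1 - ab},
\]

which is well defined whenever \(ab \neq 1\) (and behaves sensibly when \(|ab| < 1\)). So two agents can each take the other’s utility as a direct argument without the definitions collapsing, at least in this simple case.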
There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think @prase’s points are quite interesting just considering the ones in your awareness/sphere of influence.
I’m not sure exactly why prase disagrees with me (I can think of many mutually exclusive reasons, and it would take a while to write them out individually), but since two people have now responded I guess I should ask for clarification. Why is the scenario described impossible?