I believe this doesn’t answer my question; I will reformulate the problem in order to remove potentially problematic words and make it more specific:
Let the world contain at least two persons, P1 and P2, with utility functions U1 and U2. Both are traditional utilitarians: they value the happiness of others. Assume that U1 is a sum of two terms, H2 + u1(X), where H2 is some measure of the happiness of P2, u1(X) represents P1's utility unrelated to P2's happiness, and X is the state of the rest of the world; similarly, U2 = H1 + u2(X). (H1 and H2 are monotonic functions of happiness but not necessarily linear, whatever linearity of happiness would even mean, so having U as a linear function of H is still quite general.)
Also, as it is for most people, the happiness of the model utilitarians is correlated with their utility. Let's again assume that the utilities decompose into sums of independent terms, such that H1 = h1(U1) + w1(X), where w1 contains all non-utility sources of happiness and h1(.) is an increasing function; similarly for the second agent.
So we have:
U1 = h2(U2) + w2(X) + u1(X)
U2 = h1(U1) + w1(X) + u2(X)
Whether this does or doesn't have a solution (for U1 and U2) depends on the details of h1, h2, u1, u2, w1, w2 and X. But my point is that the system of equations is a direct analogue of the forbidden
U = h(U) + u(X)
i.e. the case when one's utility function takes itself as an argument.
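A minimal sketch of the existence question in the notation above (only a sufficient condition, assuming differentiable h1 and h2): substituting the second equation into the first turns the pair into a single fixed-point equation, and the system has exactly one solution whenever the composed coupling is a contraction.

```latex
% Sketch: substitute U_2 into the equation for U_1 to get a fixed-point
% problem in U_1 alone.
\[
  U_1 = h_2\bigl(h_1(U_1) + w_1(X) + u_2(X)\bigr) + w_2(X) + u_1(X) \;=:\; F(U_1).
\]
% If the slopes are bounded, |h_1'| \le L_1 and |h_2'| \le L_2, then |F'| \le L_1 L_2,
% so for L_1 L_2 < 1 the map F is a contraction and there is exactly one solution
% (Banach fixed-point theorem); for L_1 L_2 \ge 1 the system may have no solution
% or several, which is the sense in which existence depends on the details above.
```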
Also, as for most people, the happiness of the model utilitarians is correlated with their utility.
This is untrue in general. I would prefer that someone of whom I am unaware be happy, but it cannot make me happier, since I am unaware of that person. In general, it is important to draw a distinction between the concept of a utility function, which describes the decisions being made, and that of a hedonic function, which describes happiness (or, if you are not a purely hedonic utilitarian, whatever functions describe the other things that are mentioned in, but not identical to, your utility function).
Yes, I may not know the exact value of my utility, since I don't know the value of every argument it takes, and yes, there are consequently changes in utility which aren't accompanied by corresponding changes in happiness, but no, this doesn't mean that utility and happiness aren't correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn't the case.
Also, this whole issue could be sidestepped if the utility function of the first agent took the utility of the second agent as an argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agents' utilities.
There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think @prase's points are quite interesting even when considering only the people within your awareness / sphere of influence.
I'm not sure exactly why prase disagrees with me (I can think of many mutually exclusive reasons, and it would take a while to write them out individually), but since two people have now responded, I guess I should ask for clarification. Why is the scenario described impossible?
Here's another way to look at it:

Imagine that everyone starts at time t1 with some level of utility, U[n]. Now, each person forms a belief about the sum of everyone else's utility (at time t1), and then updates by adding some function of that summed (or averaged, whatever) utility to their own happiness. Let's assume that function is some variant of the sigmoid function; this is actually probably not too far off from reality. Then we know that the maximum happiness (from the utility of others) that a person can have is one, and the minimum is negative one. And assuming that most people's base level of happiness is somewhat larger than this bounded effect, this is going to be a reasonably stable system.
This is a much more reasonable model, since we live in a time-varying world, and our beliefs about that world change over time as we gain more information.
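Here is a quick numerical sketch of that update rule (the base levels are illustrative numbers, and tanh stands in for the sigmoid variant bounded between minus one and one); after a handful of iterations it settles, and the limit also satisfies the static equations from the earlier comment:

```python
# Toy model of the dynamic update described above: two agents, each with a fixed
# "base" term plus tanh of the other agent's utility from the previous step.
# tanh is a sigmoid variant bounded between -1 and 1.
import math

def simulate(base1, base2, steps=50):
    """Iterate U1 <- base1 + tanh(U2) and U2 <- base2 + tanh(U1) synchronously."""
    u1, u2 = base1, base2  # start from the base levels at time t1
    for _ in range(steps):
        u1, u2 = base1 + math.tanh(u2), base2 + math.tanh(u1)
    return u1, u2

# Base happiness larger than the bounded cross-term, as assumed above.
u1, u2 = simulate(base1=3.0, base2=-1.0)
print(u1, u2)

# The limit also solves the static system, i.e. the fixed-point equations:
print(abs(u1 - (3.0 + math.tanh(u2))),
      abs(u2 - (-1.0 + math.tanh(u1))))  # both ~0
```

The synchronous update is just fixed-point iteration on the static system, which is why the dynamic and static pictures agree once the iteration has settled.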
When information propagates fast relative to the rate of change of external conditions, the dynamic model converges to a stable point, which is exactly a solution of the static model. Are the models really different in any important respect?
Instability is indeed eliminated by the use of sigmoid functions, but then the utility gained from the happiness of others is bounded. Bounded utility functions solve many problems, the “repugnant conclusion” of the OP included, but some prominent LWers object to their use, pointing out scope insensitivity. (Personally, I have no problem with bounded utilities.)
Utility functions need not be bounded, so long as their contribution to happiness is bounded.
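To illustrate that last point in the notation used earlier in the thread (a toy example, with tanh standing in for the bounded sigmoid):

```latex
% Toy example: the cross-term is bounded even though the utilities are not.
\[
  U_1 = \tanh(U_2) + u_1(X), \qquad U_2 = \tanh(U_1) + u_2(X).
\]
% Each coupling term lies in (-1, 1), so substituting one equation into the other
% gives a continuous map of the real line into an interval of width 2 around u_1(X),
% and a solution therefore always exists. Yet U_1 and U_2 grow without bound as
% u_1(X) and u_2(X) do: the utility functions are unbounded, while each agent's
% contribution to the other's happiness stays bounded.
```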