This was going to be a comment on RichardKennaway’s comment, but I thought it deserved to be top-level since it addressed more than just Richard’s point.
Your setup is U = U1 + aV and V = V1 + bU. Richard’s is U = U1 + aV1 and V = V1 + bU1.
In words, U1 is U’s selfish preference and V1 is V’s selfish preference. aV is U’s altruistic preference in your model and aV1 is U’s altruistic preference in Richard’s model. The case is analogous for V’s preferences. The difference is that in your model, agents have altruistic preferences over the other agent’s full preferences, while in Richard’s model, agents have altruistic preferences over the other agent’s selfish preferences. You might think your model more interesting or general in some sense, since agents have preferences over other agents’ full preferences. Actually, Richard’s model is more general, and it’s fairly easy to see. The key is that as long as a, b < 1 (we assume both are > 0 in both models, so that each agent prefers to help the other), the two models are equivalent.
To see this, let’s focus on your model, with a,b<1. CCC’s comment below shows that:
U = (U1 + aV1) / (1 - ab)
V = (V1 + bU1) / (1 - ab)
You can see this another way: U = U1 + aV and V = V1 + bU. By recursively substituting, we have:
U = U1 + aV
U = U1 + aV1 + abU
U = U1 + aV1 + abU1 + a^2bV
U = U1 + aV1 + abU1 + a^2bV1 + a^2b^2U
...
U = (U1 + aV1) * sum_(k=0 to infinity) (ab)^k
Now, the series at the end is geometric, so as long as ab < 1 it converges (otherwise it diverges), and we get
U = (U1 + aV1) / (1 - ab)
and by symmetry
V = (V1 + bU1) / (1 - ab)
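If you want a numerical sanity check, here’s a quick sketch in Python (the particular values of U1, V1, a, b are just made up): iterating the recursive definitions converges to exactly the closed form above whenever ab < 1.
# Made-up values; any U1, V1 and 0 < a, b with ab < 1 behave the same way.
U1, V1 = 2.0, 5.0
a, b = 0.3, 0.6                          # ab = 0.18 < 1, so the recursion converges
U, V = U1, V1
for _ in range(200):                     # iterate U = U1 + a*V, V = V1 + b*U
    U, V = U1 + a * V, V1 + b * U
print(U, (U1 + a * V1) / (1 - a * b))    # both ~4.268
print(V, (V1 + b * U1) / (1 - a * b))    # both ~7.561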
Now you might think that this is different from Richard’s model because of the scaling factor, 1/(1-ab), but utility functions are only defined up to positive affine transformations, so we can drop this scaling factor from both utility functions and still represent exactly the same preferences: whatever positive constant we use as the scaling factor, we get exactly the same preference ranking over the choices and over lotteries over the choices. Thus in your case and in Richard’s, the utility functions represent the same preferences.
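To make that concrete (with made-up utilities for two hypothetical options): multiplying by any positive constant, 1/(1-ab) included, leaves every comparison between the options, and between lotteries over them, unchanged.
# Made-up unscaled utilities (think U1 + a*V1) for two hypothetical options, plus a 50/50 lottery over them.
u_stay, u_go = 1.0, 3.0
u_lottery = 0.5 * u_stay + 0.5 * u_go
c = 1 / (1 - 0.3 * 0.6)                  # the positive scaling factor 1/(1-ab) for some ab < 1
assert (u_go > u_stay) == (c * u_go > c * u_stay)            # ranking over options unchanged
assert (u_lottery > u_stay) == (c * u_lottery > c * u_stay)  # ranking over lotteries unchanged too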
Now to see that Richard’s is more general than yours, just note that in Richard’s case, we can set a or b as large as we want without paradox (representing a case where an agent cares more about the other agent than about itself), while your framework won’t allow that without everyone always having infinite (in absolute value) utility.
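A small illustration of that difference, with made-up numbers: set a = b = 2, so each agent weights the other more heavily than themself. Richard’s formulas still give perfectly finite utilities, while iterating your definitions blows up.
# a = b = 2: each agent cares about the other more than about themself.
U1, V1 = 1.0, 1.0
a, b = 2.0, 2.0
print(U1 + a * V1, V1 + b * U1)   # Richard's model: 3.0 and 3.0, finite
U, V = U1, V1
for _ in range(100):              # your recursive definitions: ab > 1, so this diverges
    U, V = U1 + a * V, V1 + b * U
print(U, V)                       # on the order of 1e30 and still growing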
Despite the fact that Richard’s is more flexible, it seems to me that GuySrinivasan’s is more accurate. Maia and I have toyed with this idea for a while: I can be made happy because Maia is happy because I am happy, not just when Maia’s happy for herself. You could argue that you should just factor this in as a multiplier on my internal utility (as opposed to the utility I get from Maia), but it only happens when she’s around, so...
I suspect a less elegant but more accurate solution is to bound the utility you get from external sources, or to bound the utility that gets reflected back to you more than once, because I agree that ab < 1 is a tricky constraint.
In what sense is GuySrinivasan’s more accurate? If ab < 1, the two models yield exactly the same preference relations. Guy may start by explicitly modeling the behavior that you want to capture, but since the two models are equivalent, that behavior is implicit in Richard’s model.
That’s true unless you compare cases where the two people are together to when they are apart (a=b=0).
I don’t follow. When a=b=0, U=U1 and V=V1 for both models.
Right, which is as it should be. However, say V1 is 0. Then in the model I favor, U>U1 if a,b>0, but U=U1 if ab=0, while in the model you favor, U=U1 in both cases. I believe the former corresponds better to reality, because, essentially, happiness is better when shared: you get to enjoy the other person being happy because you’re happy.
Be careful. U1 means something different in our two models. In the model you favor, U1 represents how much Jane cares about her own selfish desires before taking into account the fact that she cares about all of Bob’s desires and that Bob also cares about her selfish desires. In the model I favor, U1 represents how much Jane cares about her own selfish desires after taking everything into account. That the two models say something different about the relationship between U1 and U is no surprise, because they define U1 differently.
My proof showed that the two models are equivalent: they represent exactly the same set of preferences. Anything that your model does, my model does, modulo the definition of terms.
Also, the claim that U > U1 whenever a, b > 0 is false. Your model says:
U = U1 + aV
V = V1 + bU
Since V1 = 0, we have V = bU. Thus U = U1 + abU, so assuming 0 < ab < 1, this gives U = U1 / (1 - ab). Now U1 can be negative, in which case U < U1.
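(Concretely, with made-up numbers: take U1 = -1 and a = b = 1/2, so ab = 1/4 and U = -1 / (1 - 1/4) = -4/3, which is less than U1 = -1.)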
You’re right about U1 being negative: I meant to say |U|>|U1|, unless they’re both 0.
If you only compare situations with the same a and b values to each other, then yes, the models do yield the same results, but it seems that comparing situations with varying a and b is relevant.
I agree that U1 means something different in each model, and you can of course choose values of U1 such that you force the predictions of one model to agree with the other. I prefer to define U1 as just your selfish desires because that way, only the empathy coefficients change when the people you’re associated with change: you don’t have to change your utilities on every single action.
So you want to compare my model with one set of values for (a,b) to your model with another set of values, then say they’re different?