Right, which is as it should be. However, say V1 is 0. Then in the model I favor, U > U1 if a, b > 0, but U = U1 if ab = 0, while in the model you favor, U = U1 in both cases. I believe the former corresponds better to reality because, essentially, happiness is better when shared: you get to enjoy the other person being happy because you’re happy.
Be careful. U1 means something different in our two models. In the model you favor, U1 represents how much Jane cares about her own selfish desires before taking into account the fact that she cares about all of Bob’s desires and that Bob also cares about her selfish desires. In the model I favor, U1 represents how much Jane cares about her own selfish desires after taking everything into account. That the two models say something different about the relationship between U1 and U is no surprise, because they define U1 differently.
My proof showed that the two models are equivalent: they represent exactly the same set of preferences. Anything your model does, my model does, modulo the definition of terms.
> However, say V1 is 0. Then in the model I favor, U > U1 if a, b > 0, …
Also, this is false. Your model says:
U = U1 + aV
V = V1 + bU
Since V1 = 0, we have V = bU. Thus U = U1 + abU, so assuming 0 < ab < 1, this gives U = U1 / (1 - ab). Now U1 can be negative, in which case U < U1.
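This algebra is easy to sanity-check numerically. A minimal sketch (the function name and the particular a, b, U1 values are mine, chosen purely for illustration): it solves the two equations above in closed form and confirms both directions of the claim.

```python
# The coupled definitions U = U1 + a*V and V = V1 + b*U have the
# closed-form solution U = (U1 + a*V1) / (1 - a*b) when a*b != 1.
def solve_utilities(U1, V1, a, b):
    U = (U1 + a * V1) / (1 - a * b)
    V = V1 + b * U
    return U, V

a, b = 0.5, 0.5  # so 0 < a*b < 1

# With V1 = 0 and U1 > 0, U = U1 / (1 - a*b) > U1.
U, V = solve_utilities(U1=4.0, V1=0.0, a=a, b=b)
assert abs(U - 4.0 / (1 - a * b)) < 1e-12
assert U > 4.0

# But with U1 < 0, the same formula gives U < U1, as noted above.
U_neg, _ = solve_utilities(U1=-4.0, V1=0.0, a=a, b=b)
assert U_neg < -4.0
```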
You’re right about U1 being negative: I meant to say |U|>|U1|, unless they’re both 0.
If you only compare situations with the same a and b values to each other, then yes, the models do yield the same results, but it seems that comparing situations with varying a and b is relevant.
I agree that U1 means something different in each model, and you can of course choose values of U1 such that you force the predictions of one model to agree with the other. I prefer to define U1 as just your selfish desires because that way, only the empathy coefficients change when the people you’re associated with change: you don’t have to change your utilities on every single action.
So you want to compare my model with one set of values for (a,b) to your model with another set of values, then say they’re different?
That’s true unless you compare cases where the two people are together to when they are apart (a=b=0).
I don’t follow. When a=b=0, U=U1 and V=V1 for both models.
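The a = b = 0 case, and the unique fixed point behind the two equations more generally, can be illustrated with a small sketch (the iteration scheme and the sample numbers are illustrative assumptions, not part of either model as stated): repeatedly substituting each equation into the other converges to the closed form U = (U1 + aV1)/(1 - ab) whenever |ab| < 1.

```python
# Iterating U <- U1 + a*V, V <- V1 + b*U converges to the unique fixed
# point U = (U1 + a*V1) / (1 - a*b), V = (V1 + b*U1) / (1 - a*b)
# whenever |a*b| < 1.
def iterate_utilities(U1, V1, a, b, steps=200):
    U, V = U1, V1  # start from the purely selfish utilities
    for _ in range(steps):
        U, V = U1 + a * V, V1 + b * U
    return U, V

U1, V1, a, b = 3.0, 2.0, 0.4, 0.5  # arbitrary sample values
U, V = iterate_utilities(U1, V1, a, b)
assert abs(U - (U1 + a * V1) / (1 - a * b)) < 1e-9
assert abs(V - (V1 + b * U1) / (1 - a * b)) < 1e-9

# With a = b = 0, the equations collapse to U = U1 and V = V1.
assert iterate_utilities(U1, V1, 0.0, 0.0) == (3.0, 2.0)
```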