You’re right about U1 being negative: I meant to say |U|>|U1|, unless they’re both 0.
If you only compare situations with the same a and b values to each other, then yes, the models do yield the same results, but it seems that comparing situations with varying a and b is relevant.
I agree that U1 means something different in each model, and you can of course choose values of U1 such that you force the predictions of one model to agree with the other. I prefer to define U1 as just your selfish desires because that way, only the empathy coefficients change when the people you’re associated with change: you don’t have to change your utilities on every single action.
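The thread never writes the model out, but the point about U1 can be sketched under an assumed linear form U = U1 + a·U2 + b·U3, where U1 is your selfish utility and a, b are empathy coefficients on two other people's utilities (the names and the linearity are my assumptions, not something stated above):

```python
# Assumed linear empathy model (not given explicitly in the thread):
# U = U1 + a*U2 + b*U3, with U1 = your selfish utility for an action
# and U2, U3 = the utilities two associates get from that action.

def total_utility(u1, others, coeffs):
    """Combine a fixed selfish utility u1 with others' utilities
    weighted by empathy coefficients."""
    return u1 + sum(c * u for c, u in zip(coeffs, others))

# Selfish utility and others' utilities for one action stay fixed...
u1_action = 5.0
others_action = [2.0, -1.0]

# ...so when the people you're associated with change, you only
# swap the empathy coefficients, not the per-action utilities:
print(total_utility(u1_action, others_action, [0.5, 0.5]))  # prints 5.5
print(total_utility(u1_action, others_action, [0.9, 0.1]))  # prints 6.7
```

This is the bookkeeping advantage being claimed: re-weighting (a, b) touches two numbers, whereas folding empathy into U1 would mean revising the utility assigned to every single action.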
So you want to compare my model with one set of values for (a,b) to your model with another set of values, then say they’re different?