So, this gets at something that frequently confuses me when people start talking about personal utilities.
It seems that if I can reliably elicit the strength of my preferences for X and Y, and reliably predict how a given action will modify the X and Y in my environment, then I can reliably determine whether to perform that action, all else being equal. That seems just as true for X = “my happiness” and Y = “my partner’s happiness” as it is for X = “hot fudge” and Y = “peppermint”.
But you seem to be suggesting that that isn’t true… that in the first case, even if I know the strengths of my preferences for X and Y and how various possible actions lead to X and Y, there’s still another step (“adding the utilities”) that I have to perform before I can decide what actions to perform. Do I understand you right?
If so, can you say more about what exactly that step entails? That is… what is it you don’t know how to do here, and why do you want to do it?
You’re missing four letters. Call the strengths of your preferences for X and Y A and B, and the strengths of your partner’s preferences for X and Y C and D. (This assumes that you and your partner both agree on your happiness measurements.)
I agree there’s a choice among available actions which maximizes AX+BY, and that there’s another choice that maximizes CX+DY. What I think is questionable is ascribing meaning to (A+C)X+(B+D)Y.
Notice that there are infinitely many A,B pairs that output the same action (any positive rescaling of the pair picks the same best action), and likewise infinitely many C,D pairs, but when you add them together, which particular pairs you chose matters. What relative scaling to choose is also a point of contention, since it can alter which action comes out on top.
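To make that concrete, here’s a toy sketch in Python (the two candidate actions and all the numbers are invented for illustration): rescaling the partner’s C,D pair leaves the partner’s own best action unchanged, but it can flip which action the summed weights pick out.

```python
# Toy numbers (made up for illustration): two candidate actions, each
# producing some amount of X and some amount of Y.
actions = {"a1": (3.0, 1.0), "a2": (1.0, 3.0)}  # action -> (X produced, Y produced)

def best_action(wx, wy):
    """Return the action that maximizes wx*X + wy*Y over the candidates."""
    return max(actions, key=lambda a: wx * actions[a][0] + wy * actions[a][1])

A, B = 3.0, 1.0   # my weights on X and Y
C, D = 1.0, 2.0   # my partner's weights on X and Y

for k in (1.0, 10.0):                      # rescale the partner's pair by k
    Ck, Dk = k * C, k * D
    print(best_action(Ck, Dk))             # partner's own choice: "a2" either way
    print(best_action(A + Ck, B + Dk))     # summed choice: "a1" at k=1, "a2" at k=10
```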
So, we’re assuming here that there’s no problem comparing A and B, which means these valuations are normalized relative to some individual scale. The problem, as you say, is with the scaling factor between individuals. So it seems I end up with something like (AX + BY + FCX + FDY), where F is the value of my partner’s preferences relative to mine. Yes?
And as you say, there’s an infinite number of Fs and my choice of action depends on which F I pick.
And we’re rejecting the idea that F is simply the strength of my preference for my partner’s satisfaction. If that were the case, there’d be no problem calculating a result… though of course no guarantee that my partner and I would calculate the same result. Yes?
If so, I agree that coming up with a correct value for F sure does seem like an intractable, and quite likely incoherent, problem.
Going back to the original statement… “an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties” seems to be saying F should approximate 1. Which is arbitrary, admittedly.
And we’re rejecting the idea that F is simply the strength of my preference for my partner’s satisfaction. If that were the case, there’d be no problem calculating a result… though of course no guarantee that my partner and I would calculate the same result. Yes?
Yes. If you and your partner agree (that is, if A/B = C/D), then there’s no trouble. If you disagree, though, there’s no objectively correct way to determine the correct action.
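A quick toy check of that condition, again with invented numbers: when C,D is proportional to A,B, the combined objective AX + BY + F(CX + DY) picks the same action for every positive F; when it isn’t, different choices of F pick different actions.

```python
# Toy check (made-up numbers) of the agreement condition A/B = C/D:
# evaluate the combined objective A*X + B*Y + F*(C*X + D*Y) for several
# values of F, once with proportional weights and once without.
actions = {"a1": (3.0, 1.0), "a2": (1.0, 3.0)}  # action -> (X produced, Y produced)

def best_action(wx, wy):
    return max(actions, key=lambda a: wx * actions[a][0] + wy * actions[a][1])

A, B = 3.0, 1.0
for C, D in [(6.0, 2.0), (1.0, 2.0)]:      # first A/B == C/D, then A/B != C/D
    chosen = {best_action(A + F * C, B + F * D) for F in (0.1, 1.0, 10.0)}
    print((C, D), chosen)   # proportional case: only "a1"; other case: both actions
```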
Going back to the original statement… “an ethical rationalist’s goals in relationship-seeking should be to seek a relationship that creates maximal utility for both parties” seems to be saying F should approximate 1. Which is arbitrary, admittedly.
Possibly, though many cases with F=1 seem like things PhilosophyTutor would find unethical. It seems more meaningful to look at A and B.