For instance, we might think that maximizing total welfare is always for the best, but then realize that we don’t actually want to maximize total welfare if the people we consider our friends would be hurt.
Well, you have to understand what such a decision would actually look like. For a decision to truly maximize total welfare over all people, even as it “stabs your friends in the back”, the utility gain would have to be large enough to at least cancel out the resulting degradation of the value of friendship.
That is, if I expect my friendship with someone not to mean that they weight me more heavily than a random person in their utility function, friendship becomes less valuable, and an entire class of socially beneficial activity enabled by friendship (e.g. lower cost of monitoring for cheating) contracts.
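To put very rough numbers on that comparison (everything below is invented purely for illustration), the shape of it is:

```python
# A minimal sketch, with made-up numbers, of the comparison I have in mind:
# for the "betrayal" to genuinely maximize total welfare, its direct gain must
# exceed the welfare lost when friendship-enabled cooperation becomes less valuable.

direct_gain = 10.0           # hypothetical: welfare the betrayal directly produces for others
friendship_value = 1000.0    # hypothetical: total welfare produced by friendship-enabled
                             #   cooperation (cheap trust, low monitoring costs, etc.)
degradation_fraction = 0.02  # hypothetical: share of that value lost once friends learn
                             #   they get no extra weight in each other's utility functions

net_change = direct_gain - degradation_fraction * friendship_value
print(f"net change in total welfare: {net_change:+.1f}")  # -10.0 with these numbers
```

With those made-up numbers the betrayal comes out as a net loss; the only point is that the degradation term is easy to underestimate.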
I think your hypothetical here has the same problem as presenting the true Prisoner’s Dilemma: in the true PD, it’s hard to intuitively imagine a circumstance where the utilities in the payoff matrix already account for my compassion for my accomplice (rough sketch below). Just the same, in the tradeoff you presented, it’s hard to intuitively see what kind of social gain could outweigh a general degradation of friendship.
ETA: Okay, it’s not that hard, but like with the true PD, such situations are rare: for example, if I were presented with the choice of “My twenty closest friends/loved ones die” vs. “All of humanity except me and my twenty closest die”. But even then, if e.g. my friends have children not in the set of 20, it’s still not clear that all of the twenty would prefer the second option!
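Here is a rough sketch of the payoff-matrix point; the raw payoffs and the compassion weight are invented for illustration:

```python
# Sketch of why a "true" PD is hard to picture: once my payoffs are adjusted for
# compassion toward my accomplice, defection may stop being dominant, so the
# matrix in a true PD has to already be net of that compassion.

# Raw payoffs (mine, accomplice's) indexed by (my_move, their_move):
raw = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

care = 0.5  # hypothetical: I weight my accomplice's payoff at half my own
effective = {moves: mine + care * theirs for moves, (mine, theirs) in raw.items()}

for moves in [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]:
    print(moves, effective[moves])
# (C,C)=4.5, (C,D)=2.5, (D,C)=5.0, (D,D)=1.5 — if my accomplice defects, I now
# prefer to cooperate (2.5 > 1.5), so defection is no longer dominant and the
# "dilemma" dissolves. A true PD needs utilities stated after compassion is
# already counted, which is exactly what's hard to intuit.
```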
Wow, you really don’t search very hard for hypotheticals. It’s not actually very hard to come up with situations that have this sort of conflict. E.g. a general sending a specialized squad (including several friends) on an extremely risky mission that only they could carry out, if the alternatives would cause much more risk to the army as a whole. (Not an entirely fabricated situation, although that example doesn’t fit perfectly.)
Okay, fair point; I was interpreting the situation as being one in which you betray a friend for the benefit of others; in the example you gave, the sacrifice asked of them is part of the duties they signed up for and not an abrogation of friendship.
But I don’t think your example works either: it benefited Americans at the expense of Japanese. That’s not trading “friends’ utilities for higher other utilities”; it’s trading “friends’ utilities for some higher and some lower other utilities”.
Now, if you want to introduce some paperclip maximizers who prefer a few more paperclips to a billion human lives...
When estimated by humans, utilities aren’t objective. I’m pretty sure that if you asked Col. Doolittle in those terms, he’d be of the opinion that U(US winning Pacific Theater) >> U(Japan winning Pacific Theater), taking the whole world into account; thus he probably experienced conflict between his loyalty to friends and his calculation of optimal action. (Of course he’s apt to be biased in said calculation, but that’s beside the point. There exists some possible conflict in which a similar calculation is unambiguously justified by the evidence.)
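To make the kind of conflict I mean concrete (every number below is hypothetical, not an estimate of the actual raid):

```python
# Hypothetical numbers only: an impartial total-welfare calculation and a
# friend-weighted one can point in opposite directions about the same mission.

p_squad_lost = 0.5     # hypothetical: chance the squad (including friends) is lost
squad_size   = 16
lives_spared = 10_000  # hypothetical: lives the mission is expected to spare elsewhere
p_success    = 0.2     # hypothetical: chance it achieves that

impartial = p_success * lives_spared - p_squad_lost * squad_size
friend_weight = 1000   # hypothetical: how much more I weight a friend than a stranger
weighted = p_success * lives_spared - p_squad_lost * squad_size * friend_weight

print(f"impartial: {impartial:+.0f}, friend-weighted: {weighted:+.0f}")
# impartial: +1992, friend-weighted: -6000 — the calculation of optimal action and
# loyalty to friends genuinely disagree, which is the conflict I'm pointing at.
```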
Then I’m sure you can cite that instead. If it’s hard to find, well, that’s my point exactly.
I’m not sure I’m understanding properly. You talk as if my action would drastically affect society’s views of friendship. I doubt this is true for any action I could take.
Well, all my point really requires is that it moves society in that direction. The fraction of “total elimination of friendship” that my decision causes must be weighed against the supposed net social gain (other people’s gain minus that of my friends), and it’s not at all obvious which is greater (rough numbers below).
Plus, Eliezer_Yudkowsky’s Timeless Decision Theory assumes that your decisions do have implications for everyone else’s decisions!
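For what it’s worth, here is the shape of that weighing with invented numbers (none of which are estimates of anything real):

```python
# The point is only about orders of magnitude: a tiny fraction of "total elimination
# of friendship" can still outweigh a sizeable-looking net social gain, because the
# total value at stake is enormous. All numbers are hypothetical.

total_friendship_value = 1e10  # hypothetical: welfare society derives from friendship overall
my_share_of_erosion    = 1e-7  # hypothetical: fraction of that erosion one visible betrayal causes
net_social_gain        = 500.0 # hypothetical: others' gain minus my friends' loss

erosion_cost = my_share_of_erosion * total_friendship_value  # 1000.0
print("betrayal a net gain?", net_social_gain > erosion_cost)  # False with these numbers

# Under TDT-style reasoning the comparison is really between many correlated copies of
# each term, so the ordering above, not the smallness of any one decision, is what matters.
```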