One of the major problems I have with classical “greatest good for the greatest number” utilitarianism, the kind most people think of when they hear the word, is that people treat its prescriptions as if they were still rules handed down from on high. When given the trolley problem, for example, people think you should save the five people rather than the one for “shut up and calculate” reasons, and that they are just supposed to count all humans exactly the same because those are “the rules”.
I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea. The only way to get moral weights is from your personal preferences. Do you find that you assign more moral weight to friends and family than to complete strangers? That’s perfectly fine. If someone else says they assign all humans equal weight, well, that’s their decision. But when people start telling you that your weights are assigned wrong, then that’s a sign that they still think morality comes from some outside source.
Morality is (or, at least, should be) just the calculus of maximizing personal utility. That we consider strangers to have moral weight is just a happy accident of social psychology and evolution.
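To make that calculus concrete, here is a minimal sketch with made-up weights: the trolley choice scored by my own moral weights rather than by counting everyone exactly the same. Nothing here is canonical; the numbers are purely illustrative.

```python
# Hypothetical moral weights the agent assigns from their own preferences.
# A classical utilitarian would set every entry to 1.0.
moral_weight = {"friend": 6.0, "stranger": 1.0}

def utility(survivors):
    """Total utility of an outcome, as judged by *my* weights."""
    return sum(moral_weight[person] for person in survivors)

# The trolley is headed for five strangers; my friend is on the side track.
outcome_if_pull = ["stranger"] * 5   # pulling the lever saves the five strangers
outcome_if_wait = ["friend"]         # doing nothing saves my friend

print(utility(outcome_if_pull))  # 5.0
print(utility(outcome_if_wait))  # 6.0 -> with these weights, I don't pull
```

With every weight set to 1.0 the “obvious” answer comes back; the point is only that which answer wins depends on whose weights you plug in.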
> I do not believe that assigning agents moral weight as if you are getting these weights from some source outside yourself is a good idea.
Suppose I get my weights from outside of me, and you get your weights from outside of you. Then it’s possible that we could coordinate and get them from the same source, and then agree and cooperate.
Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.
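As a toy illustration of the difference (all weights invented): two agents who take their weights from the same external source reach the same verdict about the lever, while two agents whose internally chosen weights differ can reach opposite verdicts and end up fighting over the switch.

```python
# Toy sketch of the coordination point; every number here is invented.
def decision(weights, dies_if_wait, dies_if_pull):
    """Pull the lever iff the weighted loss from waiting exceeds the loss from pulling."""
    loss_wait = sum(weights[p] for p in dies_if_wait)
    loss_pull = sum(weights[p] for p in dies_if_pull)
    return "pull" if loss_wait > loss_pull else "wait"

main_track = ["stranger"] * 5   # who dies if nobody pulls
side_track = ["friend_of_A"]    # who dies if the lever is pulled

# Shared external source: both agents use the same weights -> same verdict.
external = {"stranger": 1.0, "friend_of_A": 1.0}
print(decision(external, main_track, side_track))   # 'pull' (agent A)
print(decision(external, main_track, side_track))   # 'pull' (agent B) -- no conflict

# Internal sources: A weighs their friend heavily, B counts everyone equally.
weights_A = {"stranger": 1.0, "friend_of_A": 6.0}
weights_B = {"stranger": 1.0, "friend_of_A": 1.0}
print(decision(weights_A, main_track, side_track))  # 'wait'
print(decision(weights_B, main_track, side_track))  # 'pull' -- conflict over the switch
```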
> Suppose I get my weights from inside me, and you get yours from inside you; then we might not be able to coordinate, instead wrestling each other over the ability to flip the switch.
In practice, people with different values manage to coordinate perfectly well via trade. I agree that an external source of morality would be sufficient for cooperation, but it’s not necessary (and having all humans really take an external source as the basis for all their choices would require some pretty heavy rewriting of human nature).
But that presupposes that I value cooperation with you. I don’t think it’s possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.
It’s like how selfless action is impossible; if I want to save someone’s life, it’s because I value that person’s life in my own utility function. Even if I sacrifice my own life to save someone, I’m still doing it for some internal reason; I’m satisfying my own, personal values, and they happen to say that the other person’s life is worth more.
> But that presupposes that I value cooperation with you. I don’t think it’s possible to get moral weights from an outside source even in principle; you have to decide that the outside source in question is worth it, which implies you are weighing it against your actual, internal values.
I think you’re mixing up levels, here. You have your internal values, by which you decide that you like being alive and doing your thing, and I have my internal values, by which I decide that I like being alive and doing my thing. Then there’s the local king, who decides that if we don’t play by his rules, his servants will imprison or kill us. You and I both look at our values and decide that it’s better to play by the king’s rules than not play by the king’s rules.
If one of those rules is “enforce my rules,” then when the two of us meet, we both expect the other to be playing by the king’s rules and to be willing to punish us for not playing by them. This is way better than having no expectations about the other person at all.
Moral talk is basically “what are the rules that we are both playing by? What should they be?” It would be bad if I pulled the lever to save five people, thinking this would make me a hero, only to get shamed or arrested for causing the death of the one person. The reasons to play by the rules at all are personal: appreciating rule-following for its own sake, enjoying other people’s appreciation of you, and fearing their reprisal if you violate the rules badly enough.
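A rough sketch of that payoff logic, with invented numbers, just to show how those three personal reasons enter the calculation:

```python
# Toy expected-payoff sketch of "why play by the shared rules at all".
# All numbers are invented; only the shape of the calculation matters.
def payoff(follow_rules, enforcement_prob):
    internal_satisfaction = 2.0 if follow_rules else 0.0    # I like being rule-abiding
    social_approval       = 3.0 if follow_rules else -1.0   # others appreciate it
    expected_reprisal     = 0.0 if follow_rules else -20.0 * enforcement_prob
    gain_from_defecting   = 0.0 if follow_rules else 4.0    # whatever cheating would grab
    return internal_satisfaction + social_approval + expected_reprisal + gain_from_defecting

print(payoff(True,  enforcement_prob=0.5))   # 5.0  -> following wins
print(payoff(False, enforcement_prob=0.5))   # -7.0 -> breaking the rules loses badly
print(payoff(False, enforcement_prob=0.0))   # 3.0  -> even with no reprisal, the first
                                             #        two (personal) terms can still
                                             #        favour following
```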
If the king were a dictator and forced everyone to torture innocent people, it would still be against my morals to torture people, whether or not I was compelled to do it. I can’t decide to adopt the king’s moral weights, no matter how much it might assuage my guilt. This is what I mean when I say it is not possible to get moral weights from an outside source: I may be playing by the king’s rules, but only because I value my life above all else, and that value is drowning out the rest of my utility function.
On a related note, is this an example of an intrapersonal utility monster? All my goals are being thrown under the bus except for one, which I value most highly.
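As a rough numerical sketch of what “drowning out” looks like (weights invented for illustration): if one goal carries a weight larger than everything else combined, any option that satisfies it beats any option that doesn’t, however badly the other goals fare.

```python
# Invented weights: "stay alive" dominates every other goal combined.
weights = {"stay_alive": 1_000_000.0, "dont_torture": 10.0, "keep_friends": 5.0}

def utility(satisfied_goals):
    return sum(weights[g] for g in satisfied_goals)

# Obey the king: I live, but I violate everything else I care about.
obey   = ["stay_alive"]
# Refuse: I keep my other values intact, but the king has me killed.
refuse = ["dont_torture", "keep_friends"]

print(utility(obey))    # 1000000.0
print(utility(refuse))  # 15.0 -- the single dominant weight decides everything
```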
Your example of the king who wants you to torture is extreme and doesn’t generalize: you have set up not torturing as a non-negotiable, absolute imperative. A more steelmanned case would be compromising on negotiable principles at the behest of society at large.