Certainly, you can assign moral weight to strangers according to their distance from you, their concreteness, and their familiarity or similarity to you. That is what many people do, and probably everyone instinctively does it to some degree. Modern utilitarians, EAers, etc. don’t pretend to be perfect; most of them just deviate a little bit from this default behavior.
One problem with this is that, in historically recent times, a very few people are sometimes placed in positions where they can (or must) decide the fates of billions. And then most people agree we would not want them to follow this rule. We don’t want the only thing stopping nuclear first strikes to be the fear of retaliation; if Reagan had had a button which would instantly wipe out all USSR citizens with no fear of revenge strikes, we would want him not to press it for moral reasons.
Another problem is that it creates moral incentives not to cooperate. If two groups are contesting a vital resource, we’d rather they share it; we don’t want each of them to have moral incentives to go to war over it because, under this weighting, it’s morally more important to secure a vital resource for your own group than it is not to kill some strangers or deprive them of it.
A related problem is that the precise function by which moral weight falls off with distance has to be very finely tuned. Should it fall off with distance squared, or cubed, or something else? Is there any way for two friends to convince one another whose moral rule is closer to correct?
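To make the tuning problem concrete, here is one hypothetical form such a weighting could take (my own illustration, not something anyone here has committed to):

$$ w(d) = \frac{1}{(1+d)^{n}}, \qquad n = 1, 2, 3, \dots $$

where $d$ is some measure of distance (physical or social) from you and $w(d)$ is the moral weight assigned to a stranger at that distance. Nothing in the underlying intuition seems to say whether $n$ should be 1, 2, or 3, which is exactly the fine-tuning worry.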
I started to write a response to this and then deleted it because it grew to over a page and I wasn’t close to being finished. Basically you are looking at things from a utilitarian point of view and would like a description of my position in terms of a utility function. But I don’t accept that point of view, even if I understand it, and the most natural description of my way of acting isn’t a utility function at all.
(I accept that to the degree that my actions are consistent, it is mathematically possible to describe those actions with a utility function—but there is no necessary reason why that utility function would look very sensible, given that the agent is not actually using a utility function, but some other method, to make its choices.)
The simple answer (the full answer isn’t simple) to your questions is that I should do the right thing in my life, which might involve giving money to strangers but probably does not involve giving 50% of it to them, and that those few people who are in positions of power should do the right thing in their lives, which definitely does not normally involve wiping out countries.