Suppose you are jogging somewhere in order to make a donation to a foreign charity; the expected number of lives saved by your donation is 3. On the way, you witness a young child drowning in a river. You have a choice: continue on, expecting to save 2 lives on net (the three your donation saves, minus the child who drowns), or save the child, expecting to lose 2 lives on net (the one child saved, against the three who die without your donation).
Suppose you know there are three people being held hostage across the street, who will be killed unless the ransom money is delivered in the next ten minutes. You’re running there with the money in hand; there’s no-one else who can make it in time. On the way, you witness a young child drowning in a river. Do you abandon your mission to save the child?
I claim that many (most?) people would be much more understanding if I ignored the child in my example, than if I did so in yours. Do you agree?
The only difference between the two scenarios is that the hostages are concrete, nearby and the danger immediate, while the people you’re donating to are far away in time and space and probably aren’t three specific individuals anyway. And this engages lots of well known biases—or useful heuristics, depending on your point of view.
How would one argue that it’s right to save the child in your example, and right to abandon it in mine? I think most people would (intuitively) try to deny the hypothetical: they would question how you can be so sure that your donation would save exactly three lives, and why making it later wouldn’t work, and so on. But if they accept the hypothetical that you have a clear choice between the two, then what difference can motivate them, other than the near/far distinction or the distinction between specific people and a statistic? What other rule can be guiding ‘what is the right thing to do’? And do you accept this rule?
I agree that the differences are more or less what you say they are, and I think those differences can be enough to determine what is right and what is not. I do not think it has anything to do with being biased.
Certainly, you can assign moral weight to strangers according to their distance from you, their concreteness, and their familiarity or similarity to you. That is what many people do, and probably everyone instinctively does it to some degree. Modern utilitarians, EAers, etc. don’t pretend to be perfect; most of them just deviate a little bit from this default behavior.
One problem with this is that, in historically recent times, a very few people are sometimes placed in positions where they can (or must) make decisions affecting the lives of billions. And most people agree that we would not want them to follow this rule there. We don’t want the only thing stopping a nuclear first strike to be the fear of retaliation; if Reagan had had a button that would instantly wipe out all USSR citizens with no fear of a revenge strike, we would have wanted him not to press it for moral reasons.
Another problem is that it creates moral incentives not to cooperate. If two groups are contesting a vital resource, we’d rather they share it; we don’t want each of them to have a moral incentive to go to war over it, which they would under this rule, since it would make keeping a vital resource for your own group morally more important than not killing some strangers or depriving them of it.
A related problem is that the precise function by which moral weight falls off with distance has to be very finely tuned. Should it fall off with distance squared, or cubed, or what? Is there any way for two friends to convince one another whose moral rule is more exactly correct?
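To make that sensitivity concrete, here is a toy sketch (the 1/(1+d)^k form, the exponents, and the distances are all invented for illustration, not anyone’s actual proposal):

```python
# Toy model, purely for illustration: moral weight falls off with distance d
# (in km) as w(d) = 1 / (1 + d)**k, for some exponent k to be chosen.
def moral_weight(distance_km: float, k: float) -> float:
    return 1.0 / (1.0 + distance_km) ** k

# How many strangers 5,000 km away would one nearby life outweigh?
for k in (1, 2, 3):
    near = moral_weight(0.01, k)    # the child in the river across the street
    far = moral_weight(5_000, k)    # the recipients of a foreign donation
    print(f"k={k}: one nearby life outweighs about {near / far:,.0f} distant lives")
```

Going from k=2 to k=3 changes the answer by a factor of several thousand, which is the sense in which the rule would have to be very finely tuned.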
I started to write a response to this and then deleted it because it grew to over a page and I wasn’t close to being finished. Basically you are looking at things from a utilitarian point of view and would like a description of my position in terms of a utility function. But I don’t accept that point of view, even if I understand it, and the most natural description of my way of acting isn’t a utility function at all.
(I accept that to the degree that my actions are consistent, it is mathematically possible to describe those actions with a utility function—but there is no necessary reason why that utility function would look very sensible, given that the agent is not actually using a utility function, but some other method, to make its choices.)
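(Here is a toy sketch of that parenthetical, with the options, distances, and the rigid “always help the nearest person” rule all invented for illustration: an agent choosing by a non-utility rule can be assigned, after the fact, a utility function that reproduces its choices, but the numbers that come out need not look sensible.

```python
# Toy agent: decides by a rigid rule ("always help the nearest person"),
# not by maximizing anything. The options are hypothetical.
options = [
    {"name": "save nearby child", "distance_km": 0.01, "lives": 1},
    {"name": "deliver ransom",    "distance_km": 0.2,  "lives": 3},
    {"name": "donate abroad",     "distance_km": 5000, "lives": 3},
]

def rule_based_choice(opts):
    """Pick whatever helps the nearest person, ignoring how many lives are at stake."""
    return min(opts, key=lambda o: o["distance_km"])

# Post-hoc "utility function": rank the options in the order the rule prefers them,
# giving the least-preferred (farthest) option rank 0.
ranking = sorted(options, key=lambda o: o["distance_km"], reverse=True)
utility = {o["name"]: rank for rank, o in enumerate(ranking)}

# Maximizing this constructed utility reproduces the rule's choice exactly...
assert max(options, key=lambda o: utility[o["name"]]) == rule_based_choice(options)
# ...but the numbers themselves are just distance ranks; they say nothing about lives.
print(utility)
```

The recovered “utilities” here encode nothing but distance, which is the sense in which the function that describes the agent need not look sensible.)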
The simple answer (the full answer isn’t simple) to your questions is that I should do the right thing in my life, which might involve giving money to strangers but probably does not involve giving 50% of it to strangers, and those few people who are in positions of power should do the right thing in their lives, which definitely does not normally involve wiping out countries.