My objection lies in the second part of the drowning child parable: the part where someone geographically distant is considered identical to the child in front of me, and money is considered identical to the act of saving. It’s some sort of ‘physics is the same everywhere’ intuition being inappropriately applied. Of course distance in time, space, or inference creates uncertainty. Of course uncertainty reduces expected value and possibly even brings the sign of the action into question if the expected variance is high enough.
A literal drowning child puts a limit on your commitment. Save this child, and your duty is discharged. When we apply this moral intuition to all the other issues in the world, our individual obligation suddenly becomes all-consuming.
Furthermore, a literal drowning child is an accident. It represents a drastic exception to the normal outcomes of your society. Your saving action is plugging a hole in a basically sound system. Do our moral intuitions stem from a consequentialist goal to save all lives that can be saved? Or do they stem from an obligation to maintain a healthy, caring, and more-or-less self-sufficient society?
To me, the best interpretation of the drowning child parable extended to a global level is that it gives me a sense of moral glee. Holy smokes! The mere act of donating money, or of doing direct work in a powerful cause for good, can save lives just the way that a more conventional heroic action can! How cool!
I’d import Eliezer’s concept of a “cheerful price,” but in reverse. Instead of being paid in money to cheerfully take an action I’d otherwise rather not do, I am being paid in lives saved to cheerfully give some money I’d otherwise rather not donate. A life saved for a mere $10,000? A bargain at twice the price!
This is a good point. I never noticed it before.
Quoth AllAmericanBreakfast:
> Do our moral intuitions stem from a consequentialist goal to save all lives that can be saved? Or do they stem from an obligation to maintain a healthy, caring, and more-or-less self-sufficient society?
If the question is just “What’s the ultimate psychological cause of my moral intuitions in these cases?”, then 🤷.
If the question is “Are we just faking caring about saving other lives, when really we don’t care about other human beings’ welfare, autonomy, or survival at all?”, then I feel confident saying ‘Nah’.
I get a sense from this question (and from Romeo’s content) of ‘correctly noticing that EA has made some serious missteps here, but then swinging the pendulum too far in the other direction’. Or maybe it just feels to me like this is giving surprisingly wrong/incomplete pictures of most people’s motivations.
Quoth Romeo:
> Of course distance in time, space, or inference creates uncertainty. Of course uncertainty reduces expected value and possibly even brings the sign of the action into question if the expected variance is high enough.
For most people I suspect the demandingness is the crux, rather than the uncertainty. I think they’d resist the argument even if the local ‘save a drowning child’ intervention seemed more uncertain than the GiveWell-ish intervention. (Partly because of a ‘don’t let yourself get mugged’ instinct, partly because of the integrity/parts thing, and partly because of scope insensitivity.)
I also think there’s a big factor of ‘I just don’t care as much about people far away from me; their inner lives feel less salient to me’ and/or ‘I won’t be held as blameworthy for ignoring large amounts of distant suffering as for ignoring even small amounts of nearby suffering, because the people who could socially punish me are also located near me’.
We can consider a 2x2 matrix:
| | Near | Far |
| --- | --- | --- |
| Undemanding | Drowning Child | Drowning Child Phone Call? |
| Demanding | In a War Zone? | Against Malaria Foundation |
Undemanding + Near: Drowning child. There’s a cost to saving the child, but because this scenario is rare, one-off, local, and not too costly, almost everyone (pace Keltham) is happy to endorse saving the child here.
Undemanding + Far: The same dilemma, except you’re (say) missing a medium-importance business call (with cost equivalent to one fancy suit) in order to give someone directions over the phone that will enable them to save a drowning child in a foreign country.
I suspect most people would endorse doing the same thing in these two cases, at least given a side-by-side comparison.
Demanding + Near: E.g., fighting in the trenches in a just war; or you’re living in WW2 Germany and have a unique opportunity to help thousands of Jews escape the country, at the risk of being caught and executed.
Demanding + Far: AMF, GiveDirectly, etc.
Here, my guess is that a lot more people will see overriding moral urgency and value in ‘Demanding + Near’ than in ‘Demanding + Far’. When bodies are flying all around you, or you are directly living through your own community experiencing an atrocity, I expect that to be parsed as a very different moral category than ‘there’s an atrocity happening in a distant country and I could donate all my time and money to reducing the death toll’.
I also think of the demandingness as generating an additional uncertainty term, in the Straussian sense.
Could you clarify what you mean by “demandingness”? According to my understanding, the drowning child should be more demanding than donating to AMF, because the situation demands that you sacrifice to rescue them, unlike AMF, which does not place any specific demands on you personally. So I assume you mean something else?
The point of the original drowning child argument was to argue for ‘give basically everything you have to help people in dire need in the developing world’. So the argument was meant to move from
A relatively Undemanding + Near scenario: You encounter a child drowning in the real world. This is relatively undemanding because it’s a rare, once-off event that only costs you the shirt on your back plus a few minutes of your time. You aren’t risking your life, giving away all your wealth, spending your whole life working on the problem, etc.
to
A relatively Demanding + Far scenario. It doesn’t have to be AMF or GiveDirectly, but I use those as examples. (Also, obviously, you can give to those orgs without endorsing ‘give everything you have’. They’re just stand-ins here.)
Equally importantly, IMO, it argues for transfer from a context where the effect of your actions is directly perceptually obvious to one where it is unclear and filters through political structures (e.g., aid organizations and what they choose to do and to communicate; any governments they might be interacting with; any other players on the ground in the distant country) that will be hard to model accurately.
My guess is that this has a relatively small effect on most people’s moral intuitions (though maybe it should have a larger effect—I don’t think I grok the implicit concern here). I’d be curious if there’s research bearing on this, and on the other speculations I tossed out there. (Or maybe Spencer or someone can go test it.)
I have heard a number of people say that they don’t want to give money to charity because they don’t trust the charities to spend the money well.
I see. So essentially demandingness is not about how strong the demand is but about how much is being demanded?
I think distance is a good correlate for whether insurance will pay, figuratively speaking. Not because there is literally an insurance company that will pay money, but because some fraction of people whose life has been saved, or whose child’s life has been saved, will think of themselves as owing a debt.
I agree with you, but this seems to very much not be the point of this parable.
Indeed, it seems like Romeo may be letting (one) altruistic part get hammered down by his other parts.
I do not think that’s the problem here; rather, it’s just a case of focusing on the details of the example instead of on the concept it’s being used to illustrate.
You’re referring to the original Peter Singer essay, not to this one, yes?
Correct
Out of curiosity, does all of the difference between the value of a child drowning in front of you and a child drowning far away come from uncertainty?
There’s also some coordination thing that’s muddled in here. Like, “everyone protect their neighbor” is more efficient than “everyone seek out the maximal marginal use of their dollar to save a life”. This doesn’t necessarily cash out—indeed, why *not* seek out the maximal marginal life-saving? (See the sketch after this comment.)
For one thing, the seeking is a cost; it can also be a long-term benefit if it “adds up”, accumulating evidence and understanding, but that’s a more specific kind of seeking (and you might even harm this project if e.g. you think you should lie to direct donations).
For another thing, you’re seriously eliding the possibility of, for example, helping to create the conditions under which malaria-ridden areas could produce their own mosquito nets, by (1) not trusting that people could take care of themselves, and (2) having a high time-preference for saving lives.
For a third thing, it’s treating, I think maybe inappropriately, everyone as being in a marketplace, and eliding that we (humans, minds) are in some sense (though not close to entirely) “the same agent”. So if I pay you low wages to really inefficiently save a life, maybe that was a good marginal use of my dollar, but concretely what happened is that you did a bunch of labor for little value. We might hope that eventually this process equilibrates to people paying for what they want and therefore getting it, but still, we can at least notice that it’s very far from how we would act if we were one agent with many actuators.
In a sense, since other differences might be unknown?