Some reservations about Singer’s child-in-the-pond argument
Peter Singer is one of the most influential philosophers, and is a strong candidate for being the person who has helped the effective altruist community the most.
In the past, Peter Singer often argued that [the moral obligation to rush into a shallow pond to save a drowning child at the cost of ruining one’s shoes] is equivalent to [the moral obligation to give to charities that reduce extreme poverty]. For example, in this 2009 video he said:
Imagine that you’re walking across a shallow pond and you notice that a small child has fallen in, and is in danger of drowning […] Of course, you think you must rush in to save the child. Then you remember that you’re wearing your favorite, quite expensive, pair of shoes and they’ll get ruined if you rush into the pond. Is that a reason for not saving the child? I’m sure you’ll say no it isn’t, you just can’t compare the life of a child to the cost of a pair of shoes, no matter how expensive. […] But think about how that relates to your situation in the world today. There are children whose lives you can save […] Nearly 10 million children die every year from avoidable, poverty related causes. And it wouldn’t take a lot to save the lives of these children. We can do it. For the cost of a pair of shoes, perhaps, you could save the life of a child. […] There’s some luxury that you could do without. And with that money, you could give to an organization to reduce extreme poverty in the world, and save lives of children. […] I think that this is what we ought to be doing.
Since Singer first posed the analogy, new information and understanding have emerged that cast doubt on the analogy’s relevance. Singer used a different analogy in his recent TED talk (in which he discussed the death of Wang Yue), but whether explicitly or implicitly, the “child in a pond” meme has caught on. In light of recent developments, it’s important to highlight the fact that the opportunities to donate to alleviate global poverty in our present world are disanalogous to the opportunity in Singer’s “child in a pond” scenario.
The most expensive pair of shoes that I own costs ~$120, and I would guess that the average American doesn’t own a pair of shoes that costs more than $200. With this in mind, Singer’s analogy suggests that one can save the life of a child in the developing world for less than $200. To determine whether Peter Singer’s analogy is a good one, we need to examine the empirical data concerning the cost of saving a child’s life.
GiveWell spent five years looking for outstanding charities that alleviate poverty in the developing world. GiveWell’s current top recommended charity, Against Malaria Foundation (AMF), distributes long-lasting insecticide-treated nets to guard recipients against malaria. GiveWell’s explicit estimate of AMF’s cost per life saved is just under $2,300. The cost of bed nets has recently fallen, and this is expected to decrease AMF’s cost per life saved, but not by a large margin.
GiveWell Co-Executive Director Holden Karnofsky has written about how explicit expected value estimates shouldn’t be taken literally, and in particular, that explicit estimates of the value of philanthropic opportunities should be adjusted to account for one’s Bayesian prior over the effectiveness of all philanthropic opportunities. In June 2012, GiveWell senior research analyst Alexander Berger wrote (speaking for himself rather than for GiveWell):
I don’t think the expected value of a $1600 donation to AMF [an earlier cost-effectiveness estimate for AMF’s cost per life saved] is actually anywhere near one life saved. The reason for this has nothing to do with how AMF works and is more a feature of its place in the total distribution of charity cost-effectiveness. I think there are a variety of practices in cost-effectiveness estimation that push in favor of a difficult-to-estimate positive bias (e.g. using evidence from RCTs, which are generally conducted in the most promising circumstances), that the most extreme cost-effectiveness estimates are more likely to be biased, and that the benefit of a marginal contribution is almost always less than the benefit of an average contribution. All of these conspire to make me think that the estimate that GiveWell provides for the “cost-per-life saved” for AMF is not the correct number for estimating the expected value of a contribution to AMF.
In the section “Concrete factors that further reduce the expected value of donating to AMF” of my blog post Robustness of Cost-Effectiveness Estimates and Philanthropy, I listed eleven concrete factors that increase AMF’s expected cost per life saved.
The reason why saving the child drowning in a pond in Singer’s hypothetical is obviously the right thing to do is that the personal cost associated with doing so is negligible relative to the benefit to others. The cost of saving a life in the developing world by donating to AMF is at least 10x greater than the cost in Singer’s “child in a pond” analogy, and possibly much greater. This substantially weakens Singer’s argument.
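As a quick sanity check of the “at least 10x” claim, using the rough figures already cited in this post ($200 as a generous shoe price, and GiveWell’s ~$2,300 explicit estimate for AMF), the gap can be computed directly:

```python
# Rough comparison of Singer's analogy with the empirical estimate.
# Both figures are taken from this post and are approximate.
shoe_cost = 200            # generous estimate for an expensive pair of shoes ($)
amf_cost_per_life = 2300   # GiveWell's explicit estimate for AMF ($)

ratio = amf_cost_per_life / shoe_cost
print(f"Cost per life saved is ~{ratio:.1f}x the cost of the shoes")
# Note: this is before any upward Bayesian adjustment of the kind
# Karnofsky and Berger describe, which would only widen the gap.
```

Even taking GiveWell’s explicit estimate at face value, the ratio is about 11.5x, and the Bayesian considerations above push it higher still.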
I raised this point in a recent comment thread on the GiveWell blog, and Doug S. concurred, writing:
Honestly, there really is a big difference to me if X is different by orders of magnitude. The U.S. federal minimum wage is currently $7.25 an hour. Payroll taxes are 7.5%, so take-home pay becomes $6.70 an hour. It takes 343 hours – two months, working full time – working a minimum wage job to earn the $2300 it takes your #1 charity to save a life. There’s a big difference between $200 and $2000, between one week of minimum wage work and two months of minimum wage work.
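Doug’s figures can be reproduced with a few lines of arithmetic (using his stated numbers; the actual employee payroll tax rate differs slightly, but the conclusion is unchanged):

```python
# Reproducing Doug S.'s arithmetic from the comment above, using his figures.
wage = 7.25              # U.S. federal minimum wage, $/hour
payroll_tax = 0.075      # payroll tax rate as he cites it
cost_per_life = 2300     # GiveWell's explicit estimate for AMF, $

take_home = wage * (1 - payroll_tax)   # ~$6.71/hour (he rounds to $6.70)
hours = cost_per_life / take_home      # ~343 hours
weeks = hours / 40                     # ~8.6 full-time weeks, roughly two months
print(f"${take_home:.2f}/hour take-home; {hours:.0f} hours; {weeks:.1f} weeks")
```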
Holden responded:
I think Jonah and Doug are both looking for more precision than is reasonable. Robust facts about disparities in wealth – which you will also see qualitatively if you travel to the developing world – are sufficient to make the point that you have a great deal of power to help others a lot by giving up a little. If you’re looking for any sort of precise “dollar cost per quantity of good accomplished” (over and above the kind of robust comparisons I just described) such that a factor of 5-10 is crucial to how much you decide to give, I think it is – and long knowably has been – unrealistic to get such a thing. I think nearly all targets of Peter Singer’s argument have long implicitly recognized this fact. Perhaps there are some arguments for which such precision would be necessary, but if so they aren’t arguments that I see as having much traction. I don’t empathize with the view that such precision is necessary in order to make the broad argument that you ought to give generously.
What Holden’s comment misses is that there’s a big difference between the following two statements:
(1) “A rough estimate for the cost of saving a life is the cost of an expensive pair of shoes, but it could be much higher or much lower”
(2) “A rough estimate for the cost of saving a life is over 10x greater than the cost of an expensive pair of shoes, but the cost is probably higher, and possibly much higher, due to Bayesian regression.”
The problem with Singer’s “child in a pond” analogy isn’t that real world cost-effectiveness estimates aren’t precise. The problem with Singer’s “child in a pond” analogy is that there’s a strong case for the cost-effectiveness of donating to AMF being vastly lower than Singer’s analogy suggests.
Peter Singer has been very successful in getting people interested in donating to alleviate global poverty. One could argue that his “child in a pond” analogy contributed to his success, and that continuing to use it is, for this reason, justified. Nevertheless, the analogy is problematic.
Vipul Naik wrote (paraphrased):
There is a tension between the tactically optimal approach for convincing a larger number of people to donate more, and the argument that is most grounded in empirical reality. I think that rather than minimizing the tension, it’s more courageous and epistemically admirable to openly and very explicitly admit that the Singer-style (implicit or explicit) “you-can-save-a-life-for-the-price-of-a-pair-of-shoes” claim, *if true*, would be far more compelling a reason to donate than the argument based on disparities in wealth.
See also this comment where Carl Shulman writes: “I think it’s bad news for probably mistaken estimates to spread, and then disillusion the readers or make the writers look biased. If people interested in effective philanthropy go around trumpeting likely wrong (over-optimistic) figures and don’t correct them, then the community’s credibility will fall, and bad models and epistemic practices may be strengthened.”
Singer’s argument is not the only argument for donating to alleviate poverty in the developing world. For example, in a recent blog post, Holden wrote:
To us, the strongest form of the challenge [to donate to alleviate poverty in the developing world] is not “How much should I give when $X saves a life?” but “How much should I give, knowing that I have massive wealth compared to the global poor?” Perhaps the most vivid illustration comes not from Against Malaria Foundation (our #1-rated charity) but from GiveDirectly (our #2). If you give $1000 to GiveDirectly, ~$900 will end up in the hands of people whose resources are a tiny fraction of yours. GiveDirectly’s estimate – which we believe is less sensitive to guesswork than “cost per life saved” figures – is that recipients live on ~65 cents per day, implying that such a donation could roughly double the annual consumption for a family of four, not counting any long term benefits.
As Vipul commented, this argument is much weaker than Singer’s “child in a pond” argument.
In Living High and Letting Die: Our Illusion of Innocence (pg. 135) Peter Unger gives a Singer-style analogy that can be made more faithful to present day empirical realities than Singer’s “child in a pond” analogy. The form of the argument (modified for use in the present context) is this:
Imagine that you have a car that’s worth AMF’s actual cost per life saved. You park your car on unused train tracks and get out in order to walk around. You see a child playing in a tunnel off in the distance, and see a train headed toward the tunnel. If the train proceeds, the train will kill the child. You have access to a switch that can be used to divert the train toward the unused train tracks where your car is parked. If you flip the switch, the train will demolish your car, but nobody will be killed. Do you flip the switch?
I think that most people would say that flipping the switch is the right thing to do. But I don’t think that they would say that the moral obligation is as great as the moral obligation in Singer’s “child in a pond” scenario.
Acknowledgments: Thanks to Vipul Naik, Nick Beckstead and Luke Muehlhauser for helpful feedback on an earlier version of this post.
Note: I formerly worked as a research analyst at GiveWell. All views are my own.
I don’t find this reservation very compelling. Just say you’re wearing a nice suit as well as expensive shoes, and you’re almost there.
A more meaningful difference to me is whether there’s a clear endpoint. If you ruin your suit saving the kid in the pond, well, there probably aren’t any other drowning children in sight and you can go home and feel good about yourself. But as soon as I acknowledge an obligation to help people I have never met, there is nowhere I can stop and still feel decent. It is far, far easier to live with myself if I choose never to give anything than if I save ten lives and then decide that saving an eleventh would cost me too much.
This is an excellent objection, and very similar to what I thought when I read the post. Here’s some more thoughts in the same direction.
Let’s say that after diving into the pond to save the child, and ruining all of my clothes in the process (which still don’t add up to $2000; no complete set of clothes I own adds up to that much), the very next day, I am walking across the same pond (in new clothes), and the kid’s drowning again.
So of course I save him again and am out a bunch of money/inconvenience again.
And then the next day another kid’s drowning there.
And the next day.
At this point, most of my clothes are ruined, so I’m pretty upset. But more than that: I’m angry. Who the heck is letting these kids play in the pond? Where are their parents? Shouldn’t someone put up a giant sign that says “DON’T PLAY IN THE POND, YOU IDIOT KIDS”, or a fence, or an electrified fence? Is relying on strangers walking across the pond and ruining their clothes to save these hapless kids really the best solution to this problem? Why am I on the hook for this?
At that point, I might complain to the police, say, or the city government, apprise them of the pond situation, and then go to work by a different route, avoiding the pond henceforth.
The analogy should be clear. There are children whom I can save by donating large sums of money per child to get them mosquito nets? Why am I on the hook for this? This will never end. Are there not more systematic ways of dealing with the whole situation? Some sort of mosquito-net mass production program? Eradicate the mosquitoes somehow? Stop having children?
Essentially, the intuition here is that there is someone, somewhere (possibly many someones in different places), shirking responsibility or otherwise behaving in a morally blameworthy fashion, the consequence of which behavior is kids continually being placed in life-threatening situations, which I ostensibly then have the moral obligation to save them from. Well, the end result of me having a policy of simply going ahead and fulfilling this supposed obligation is that there will always be more kids to save, forever. This does not seem like a positive result for anyone, with the possible exception of the aforementioned obligation-shirkers.
If you make this particular change to the example, then the thing you’re trading off against your new shoes and clothes isn’t “saving a child’s life” but “saving one day of a child’s life”. It’s reasonable to value that rather less (which is not to say that it’s reasonable to value it less than your shoes).
Make it another child (as you do in the next paragraph) and it’s more to the point.
But. Part of the reason why “keep saving these children, one by one, at great personal cost” might not be the right answer is that, as you point out, there are other things that are likely to be more effective and efficient, and other people better placed than you to address the problem, who will probably do so if you let them know.
None of that applies in the case of people dying of malaria in sub-Saharan Africa, so far as I can see. There aren’t obviously better approaches than malaria nets, which is why AMF is allegedly one of the most effective charities in good done per unit cost. And, while there are certainly people better placed to address the problem than you are, just telling them “hey, there are people dying of malaria” probably won’t do much to make them do it.
It’s not that I think there are more effective solutions to “save these kids from malaria” than AMF, it’s that the problem of “there are kids to be saved from malaria” is continual and open-ended. There will (it seems) always be kids to be saved from malaria, or something or other. The idea that I am morally obligated to keep doing this, forever, is what seems incorrect.
To view it from another perspective: one of the reasons I would save the drowning kid from the pond is that I want to live in a world where if something bad happens to someone, like “oh no, I am drowning in a pond”, nearby people who are able to help, do so, even at some (not entirely unreasonable) one-time expense. However, I don’t want to live in a world where bad things happening to people is just a fact of life, and other people end up having to reduce themselves to pauper status to continually fix the bad things.
Saving the child is a causal step toward the former world. Donating to AMF seems to be a causal step toward the latter world. Show me a way to fix the problem forever, and I might be interested. “Eradicate all the mosquitoes” seems like a possibility (we did it here in the U.S.). “Stop having children” might be another (though I’m not sure what would be the best way to accomplish that).
Historically, societies with high child mortality have also had high birthrates. If the demographic transition model is right, letting the child die is likely to encourage a continued high birthrate, and saving the child may lower the birthrate.
As far as I can tell, decline in birth rates is caused by availability of contraception and some other factors related to industrialization and technological growth, not by a lowered death rate per se, which by itself simply leads to population growth. Wikipedia also suggests that the demographic transition model may not apply to less-developed countries with widespread disease (AIDS, bacterial infections) such as many in Africa.
What we should be looking for is ways to discourage people from having children, at all, in places and situations where we expect that the kids are likely to need such outside “saving” as discussed in the OP.
I try in general to replace the “lives saved” metric with the “QALYs gained” metric for precisely this reason; maximizing lives saved has some very strange properties. (My go-to example is that it leads me to prefer to avoid curing a condition that causes periodic life-threatening seizures, preferring to treat each seizure as it occurs.)
You can get around that particular example by disvaluing lives lost, rather than valuing lives saved. Of course I agree that actually QALYs or something similar are a far better metric.
“None of that applies in the case of people dying of malaria in sub-Saharan Africa, so far as I can see. There aren’t obviously better approaches than malaria nets, which is why AMF is allegedly one of the most effective charities in good done per unit cost. And, while there are certainly people better placed to address the problem than you are, just telling them “hey, there are people dying of malaria” probably won’t do much to make them do it.”
Well, but then the more important question becomes “how can you convince these people to address the problem”.
Not necessarily more important. (If it turns out that actually there isn’t any realistic way to convince them to address the problem, then “ok, so what else can we do?” is a higher-value question.)
Well worth addressing, though, for sure. Getting governments and very rich people to spend more on helping the neediest parts of the world might be a very valuable activity.
How would you know there isn’t any realistic way to impact them using your resources? Donation to a think-tank seems one possible option.
Probably hard to know with much confidence. So I suppose the question might be (in so far as this makes sense) objectively unimportant but subjectively important.
It’s more accurate to think of bed nets as one fork of the malaria eradication problem. Since malaria parasites need both primary (mosquitoes) and intermediate hosts (infected humans or other vertebrates) in order to reproduce, anything that breaks transmission of the disease or kills its vectors is also going to help reduce its prevalence, and insecticide-treated netting is one of the more cost-effective ways of doing both; it’s not the only one, but it is simple and parallelizable enough to lend itself to charitable funding. Reading about previous successful eradication efforts might be helpful if you’re interested in vector control more generally.
Last I heard, the AMF and similar organizations were aiming to eliminate malaria in Africa within this decade. That sounds a little ambitious to me, but even if that goal’s not met it’s certainly not the open-ended problem you’re painting it as.
If that’s true, I think they absolutely should advertise that fact strongly, as that seems to me to be one of the most persuasive reasons to donate. “You can save a child’s life!” and “We are aiming to fix this problem forever and you can help” are very different.
You’re not “on the hook” or anything of the sort. You’re not morally obligated to save the kids, any more than you’re morally obligated to care about people you care about, or buy lunch from the place you like that’s also cheaper than the other option which you don’t like. But, if you do happen to care about saving children, then you should want to do it. If you don’t, that’s fine; it’s a conditional for a reason. Consequentialism wins the day; take the action that leads most to the world you, personally, want to see. If you really do value the kids more than your clothes though, you should save them, up until the point where you value your clothing more (say it’s your last piece), and then you stop. If you have a better solution to save the kids, then do it. But saying “it’s not my obligation” doesn’t get you to the world you most desire, probably.
Well, unless what you happen to value is discharging your obligations, in which case the whole consequentialist/deontologist divide fades away altogether.
Right, that’s the thought that motivated the “probably” at the end. Although it feels pretty strongly like motivated cognition to actually propose such an argument.
Possibly vaguely relevant
This sounds tautological. I would be reasonably sure I knew what you were saying if not for that line, which confuses me.
I make a relevant rule-consequentialist argument here.
It is tautological, but it’s something you’re ignoring in both this post and the linked reply. If you care about saving children as a part of a complex preference structure, then saving children, all other things being equal, fulfills your preferences more than not saving those children does. Thus, you want to do it. I’m trying to avoid saying you should do it, because I think you’ll read that in the traditional moral framework sense of “you must do this or you are a bad person” or something like that. In reality, there is no such thing as “being a bad person” or “being a good person”, except as individuals or society construct the concepts. Moral obligations don’t exist, period. You don’t have an obligation to save children, but if you prefer children being saved more than you prefer not paying the costs to do so then you don’t need a moral obligation to do it any more than you need a moral obligation to get you to eat lunch at (great and cheap restaurant A) instead of (expensive and bad restaurant B).
Taboo “moral obligation”. No one (important) is telling you that you’re a bad person for not saving the children, or a good person for doing so. You can’t just talk about how you refuse to adopt a rule about always saving children; I agree that would be stupid. No one asked you to do so. If you reach a point (and that can be now) where you care more about the money it would take to save a life than you do about the life you could save, don’t spend the money. Any other response will not fulfill your preferences as well (and yours are the only ones that matter). Save a few kids, if you want, but don’t sell everything to save them. And sure, if you have a better idea to save more kids with less money then do it. If you don’t, don’t complain that no one has an even better solution than the one you’re offered.
I suspect that part of the problem is that you don’t have a mental self-image as a person who cares about money more than children, and admitting that there are situations where you do makes you feel bad because of that mental image. If this is the case, and it may not be, then you should try to change your mental image.
Note: just because I used the term preferences does not equate what I’m saying to any philosophical or moral position about what we really value or anything like that. I’m using it to denote “those things that you, on reflection, really actually want”, whatever that means.
Yeah, agree with almost everything you say in the first two paragraphs. Your overall points, as I read them, are not new to me; mostly I was confused by what seemed to me a strange formulation. What I thought you were saying and what I am now pretty sure you are saying are the same thing.
Some quibbles:
Well, no comment on who’s important and who’s not, but I definitely read some posters/commenters here as saying that people who save children are good people, etc. That’s not to say I am necessarily bothered by this.
It seems mistaken to say that I (or anyone) care about money as such. Money buys things. It’s more like: I care about some things that money can buy (books, say? luxury food products?) more than I care about other things that money can buy (the lives of children in Africa, say). In any case, I try not to base my decisions on a self-image; that seems backwards.
P.S. I have to note that your comments don’t seem to address what I said in the comment I linked (but maybe you did not intend to do so). That comment does speak directly to what my preferences in fact are, and what actions of mine I think would lead to their satisfaction.
So you said that if you want to save children, you should do it (where ‘should’ shouldn’t be heard as a moral imperative or anything like that). Suppose I do want to save children, and therefore (non-morally) should save them, but I don’t. What do you call me or my behavior?
Akrasia?
That’s qualitatively the same as you wanting to work but actually ending up spending the whole afternoon on TVTropes or whatnot, or wanting to stop smoking but not doing so, as far as I can tell.
Hmm, that’s a good answer. But akratic cases seem to me to be at least a little bit different: in the case of akrasia, I want to keep working but I also clearly want to read TVTropes (otherwise, why would I be tempted?). And so it’s not as if I’m just failing to do what I want, I’m just doing what I want less instead of what I want more.
Now that I put it like that...I’m starting to wonder how akrasia is even a coherent idea. What could it mean for my desire to read TVTropes to overwhelm my desire to work except that I want to read TVTropes more? And if I want to read TVTropes more than I want to work, in what sense am I making a mistake?
And in the case of giving money to save children, you want the children to be saved but you also want to keep your money to spend it on other stuff.
It can be described as different parts of you, or different time-slices of you, wanting different things: i.e., what you-yesterday wanted is different from what you-today wish you-yesterday had done: maybe you now regret spending all afternoon reading TVTropes rather than working.
Sorry, I was thinking of a crazier kind of situation. I’m thinking of a situation where you want to save the kids, and this is your all-things-considered preference. There are other things you want, but you’ve reflected and you want this more than anything else (and let’s say you’re not self-deceived about this). It follows then that you should save the kids. But say you don’t, what do we call that? And I want to grant straight off that there may be some kind of impossibility in my description. Only, there probably should be no impossibility here, otherwise I’m at a loss as to how the word ‘should’ is being used.
Thanks for the link, I’ll think this over.
Well, that’s just making a trade-off. If you like strawberry ice cream, and you like chocolate ice cream, but you can’t afford to eat both, and you like chocolate ice cream more than strawberry ice cream, you won’t eat strawberry ice cream even though you like it.
So what would you call it if, in the above scenario, I ate some strawberry ice-cream? Assume that my desires are consistent over time, and that my desiring-parts have been reconciled without contradiction, i.e. that this is not a case of akrasia. Am I describing something impossible? Or am I just behaving irrationally?
Yes, if all the assumptions you made hold (also, no declining marginal utility for any ice cream flavour, no preference for variety for variety’s sake, and similar), then I would call eating strawberry ice-cream irrational.
(How likely these assumptions are to be a reasonable approximation to a scenario in real life, that’s another story; for example, people get bored when they always do the same things.)
Okay, thanks. So the ‘should’ of ‘you should save the children (if you want to)’ is a ‘should’ of rationality. Now do I have any reason at all to be rational in this way, or do I just have reason to get the thing I want (i.e. by reason of wanting it)?
I mean that if I want X, and this is a reason to get X, do I have another reason to get X, namely that to do so would be rational and to fail to do so would be irrational?
You appear to be failing the twelfth virtue. Rationality is that which leads you to systematically get what you want, not some additional thing you might want in itself.
Hmm, so this seems like a problematic thing to tell someone: if I listen to you, then I’m going to be changing my mind about an object level question (“do we have reasons to be rational?”) because taking a certain position on that question violates a ‘virtue of rationality’. So if I do heed your warning, I fail in the very same way. If I don’t, then I’m stuck in my original failure.
But fair enough, I can’t think of a way to defend the idea of having reasons (specifically) to be rational at the moment.
I—I don’t know what to answer at this point—Do you have any idea how you came to care about being rational in the first place?
Would you rather be the one who did what you think of as rational, or the one who is currently smiling from on top of a giant heap of utility? (Too bad that post must use such a potentially controversial example...)
I don’t know for sure that I do care; I got started on this line of questioning by asking a moral nihilist (if that’s accurate) what they meant by ‘should’ in the claim that if you want to save kids, you should save them. Turns out, the consequent of that sentence is pleonastic with the antecedent.
I’d, probably like you, raise doubts as to what the difference could be between being rational on a given occasion, and getting the highest expected return. On the other hand, I don’t entirely trust my preferences, and the best way to represent the gap between what I want and what I should want seems to be by using words like ‘rationality’, ‘truth’ and maybe ‘goodness’. If you asked me to choose between the morally right thing, and the thing that maximises my own standard of moral value, I’d unhesitatingly go for the former.
So I agree that there’s some absurdity in distinguishing in some particular case between rationality and a particular choice that maximises expected (objective) value. It may be wrong to conclude from this that we can eschew mention of rationality in our actual decision making though: the home of that term may be as a goal or aim, rather than as something standing along-side a particular decision. Once the rational decision has been arrived at, it’s identical with ‘rationality’. Until then, rationality is the ideal that guides you there. Something like that.
Relevant parable: The Upstream Story.
Does GiveWell take “acting upstream” into account in its assessment of charity effectiveness?
The link is just a google search which doesn’t give an obvious source for a parable.
Some versions of the parable
There’s something to these concerns (see the first and third bullet points here), but I believe that the broad picture is that if donors to AMF didn’t step in then the children wouldn’t be covered by mosquito nets. That said, I think that your concerns do reduce AMF’s expected value somewhat.
Living with this constant moral pressure is unlikely to make you most effective. A better alternative is to budget your money in advance, and give yourself a modest amount that you are free to use as you please. Jeff Kaufman’s Keeping Choices Donation Neutral argues for an approach along these lines. If I remember correctly, Toby Ord makes a similar point in an early unpublished essay.
I don’t own a suit anywhere near that expensive, and I don’t think that most people in the developed world do either. Do you?
Because of diminishing marginal utility, the more you donate, the greater the personal cost becomes. The personal cost of a pair of shoes is a lot higher when you can’t afford to replace it than it is for most people in the developed world at the margin. So the common sense intuition in Singer’s hypothetical breaks down progressively more as you donate more.
This relates to my post, which is about the personal cost per life saved being a lot higher than in Singer’s hypothetical.
No, I don’t own a suit at all, but there are many other possible examples. Perhaps instead I am wearing a watch that my mother gave to me before she died and it has great sentimental value, or perhaps I have some document, hidden away in an inner pocket, that would be expensive to replace or whose destruction risks getting me fired. The exact details don’t seem that important. I don’t happen to have either of these items, any more than I have shoes worth over $20, but it is not too hard to imagine.
It’s true that, by the point I have made my own life as unpleasant or unsafe as that of the people I am trying to help, diminishing marginal utility means I can definitely stop without guilt. Realistically, I am not going to go that far. I am not a saint, and if I accept the obligation but stop giving earlier I will feel guilty and hypocritical and awful about doing so.
I gave the Unger example at the end of my post.
I agree with Pablo.
My post is relevant to triaging with respect to different altruistic efforts.
Alternatively, one could give up the obligation.
Which is what I do. Singer has a point, and argues it well; he makes it live. And yet I find myself unmoved, and answer Singer’s modus ponens with modus tollens. I make few charitable donations, and those I have made have been to things I fortuitously had some personal connection with.
This is not a recommendation to anyone else to do the same, just a statement that it is possible to refuse the chalice. I will not make of myself a slave to an ethical system. Look where it got George Price. Look where it fictionally gets Superman.
It (in that fiction) gets him enabling the transition from our present (frankly rather rubbish) world to a glorious future of peace and plenty for all. Not so bad, if you find fictional evidence compelling.
How Superman is treated in that strip is how we treat our machines. We run an electricity generator for years, stopping it only for the minimal time necessary for maintenance, and when it is worn out or obsolete, take it apart for scrap. Of course it is fine to treat a (non-sentient) machine like that. That is what we make the machines for. But if reasoning leads to the conclusion that we should treat ourselves like that, then I conclude that the reasoning is broken, even if I don’t know where it went wrong.
You may well be right about the real world. But in the fictional world of that SMBC comic, it seems to me that (miserable Superman + billions of people living in peace and prosperity) is plausibly an outcome that even Superman might prefer to (happy Superman + billions of people suffering war, poverty, disease, etc.).
In other words, I don’t think your fictional example is good support for your thesis. Which is too bad, because (like much else at SMBC) it’s a funny and thought-provoking comic.
Happy Superman + billions of people living in peace and prosperity is better than both. Some hypotheticals should be fought.
In the hypothetical world, Superman brings the whole planet to prosperity and then… he has trouble finding a job, and ends up working at the museum.
Why exactly is the person who saved the whole planet required to work? Did humanity meanwhile evolve beyond the use of “thank you”? How about just asking some volunteers to donate 0.1% of their monthly income to Superman? If just one person in a few thousand agrees, Superman can retire happily.
The problem with the comic’s story is not just the extreme altruism, but that humanity appears unable to cooperate on the Prisoner’s Dilemma with Superman. (I am not saying that’s necessarily an incorrect description of humanity. Just a sad one.)
I agree that some hypotheticals should be fought. But it seems to me that you’re objecting to the basic premise of the strip and also trying to use it as fictional evidence.
In the fictional world depicted there, how do you get to happy Superman + happy billions?
In our actual world, how do you get to (if I’m understanding correctly the analogy you want to draw) comfortable first-worlders not needing to sacrifice anything + less malaria, starvation, etc., in the poorer parts of the world?
(From the other things you’ve said in this thread it seems like you’re actually happy to get to comfortable first-worlders not needing to sacrifice anything + starvation and misery in the developing world. Fair enough; your values are what they are and I’m not going to try to change them. But then what does the hypothetical outcome (happy Superman + happy billions) have to do with anything?)
I am not using the strip as evidence of anything. The strip is just an illustration of a certain imaginary situation, and implicitly poses the question, is it a good one, or a bad one? An answer to which must consider what alternatives are on offer. The strip itself presents Superman’s original behaviour, and his revised mission. But while reality has limits, hard choices, and problems without attainable solutions, fiction does not.
If Superman can fight crime retail, and then fight poverty wholesale, why should he not instead create the means of fighting poverty wholesale? Well, in canon, because he is not known for his brains. All he can really do is hit things very hard. No matter, after the “This began to wear on the hero” frame, introduce some genius superhero to point this out to Superman. The genius can do the inventing while Superman helps with the grunt work of building it, and humanity gets muon fusion engines decades earlier. My first fanfic.
In the same way as the author: by imagining it. The question is, why do you choose to imagine only the two scenarios in the strip, and reject the legitimacy of imagining the third?
Not happy, but I’m not willing to level the peaks of civilisation to fill in the troughs.
It’s a result I think we would prefer to either of the others. In the real world, the question is how to get there. Distributing anti-malarial nets is all very well, but as SaidAchmiz has been saying, there needs to also be a larger strategy.
Then at least one of us is confused.
If you’re just pointing to the strip as an illustration of something bad, then I disagree about its badness (even from hypothetical-Superman’s vantage point): the strip shows Superman putting up with something pretty bad, but achieving something good for it, and I think even hypothetical-Superman would agree that the overall outcome is a good one.
Once you start arguing about what alternatives there might have been within the fiction, and saying “while reality has limits … fiction does not”, well, it seems like you’re saying “It’s bad to ask the fortunate few to sacrifice their interests for the sake of the miserable many, and we can see that because in my reimagining of the fictional world of this comic Superman does this but—so I decree—doesn’t need to”, and I don’t see what you’re gaining by appealing to the comic.
You’re welcome to imagine anything you like. I just don’t see the point of saying “So-and-so is bad; see, here’s an imaginary situation a bit like so-and-so, in which I’ve decided what’s possible and what isn’t, and it turns out to be a bad situation”.
Well, supposedly AMF thinks the nets are part of a larger strategy, and IIRC the Gates Foundation is trying to wipe out malaria. But, in any case, I don’t see how to get from “there should be a larger strategy” to “it’s OK for me not to do anything concrete”. Of course it might be OK for you not to do anything concrete, but what I don’t see is why the fact that there ought to be a larger strategy is any support for not doing anything concrete.
If I can imagine being at a switch deciding whether a train will kill five people or one, I can also imagine everyone getting off the train and the train derailing where it doesn’t kill anybody. But that would defeat the whole point of imagining the train in the first place.
In the original comics, Superman invented things that were far ahead of even modern technology; including a series of robot duplicates that were visually indistinguishable from himself (not as powerful, of course, but he occasionally dressed one of them up as Clark Kent in order to maintain his disguise). In fact, super-intelligence was supposed to be one of his powers.
Exactly why he never produced a range of android butlers, or otherwise advanced technology, is a mystery to me. The only possible reason that I can think of is that the authors wanted to keep the world’s visible technology levels more-or-less familiar to their readers.
He is certainly always able to think at the same speed he can do everything else. eg. Clark can write a Daily Prophet article in seconds, leaving the keyboard smoking. Even with only an IQ of, say 130 he should be comfortably ahead of any mere human for the purpose of achieving any particular intellectual task. Spending 10,000 subjective hours on something does wonders for achieving expert performance.
IIRC the ten thousand hours thing was ten thousand hours of tutored practice at a level appropriate to the learner. I can see Clark running into the limitations of other people’s performance rather than his own as a bottleneck.
The important factor is that it is deliberate practice. Tutors are useful but not as necessary during the practice (this obviously varies depending on the degree and kind of feedback required).
In particular where the information is not yet contained in all the textbooks and internet resources currently in existence his learning will be much slower. He’ll have to invent the science (or engineering) himself as he goes.
Daily Planet. Not perhaps the best name for a newspaper, as it appears to hint at Clark’s otherworldly origins...
Anyhow, there is a limit to the speed at which even Superman can type; that limit being the keyboard. Your average keyboard is more than fast enough to keep up with a human typist, but not infinitely fast...
Assuming that the limit is in the PS/2 protocol (and not in the keyboard hardware—Clark may have quietly replaced the keyboard on his desktop with a high-speed variant that he’d built himself, but it still needs to talk to the computer using a known protocol); assuming that the keyboard’s clock signal runs at 16.7kHz (at the top end of what the protocol allows) and continually outputs keypresses at 33 bits per key (11 bits per scancode; each key transmits one scancode when pressed, and two scancodes when released), Clark can type at a maximum of 506 characters per second; assuming an average of five characters plus a space per word, that works out to 84 words per second at most. A thousand-word article would therefore take close to 12 seconds to type up. Note that this is before dealing with punctuation or capital letters (the shift key also sends keycodes); moreover, double letters (like the cc in ‘accept’) will slow things down further; it’ll take some slight time for the keyboard to register that the key is no longer being pressed, and Clark has to wait that long before hitting it again. (That actually suggests a test for a superpowered reporter; keep an eye open for a reporter whose articles avoid double letters).
Clark could certainly work faster than that if he were, say, engraving on a stone tablet, or using pencil and paper (I’m not sure about pens, the ink needs a little time to flow to the nib). Pencil and paper would be limited by how fast the pencil can move across the paper without igniting the paper...
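The arithmetic in CCC’s comment can be sketched in a few lines of Python. This is only a sketch of the numbers as stated upthread: the 16.7 kHz clock, 11-bit scancodes, and the one-scancode-down / two-scancodes-up counts are the comment’s assumptions, not a full model of the protocol.

```python
# Ceiling on typing speed over PS/2, using the figures from the comment above.
CLOCK_HZ = 16_700                            # top of the PS/2 clock range
BITS_PER_SCANCODE = 11                       # start + 8 data + parity + stop
BITS_PER_KEYSTROKE = 3 * BITS_PER_SCANCODE   # one scancode on press, two on release

chars_per_second = CLOCK_HZ / BITS_PER_KEYSTROKE
words_per_second = chars_per_second / 6      # five letters plus a space per word

print(f"{chars_per_second:.0f} characters per second")             # 506
print(f"{words_per_second:.0f} words per second")                  # 84
print(f"{1000 / words_per_second:.1f} s for a 1000-word article")  # 11.9
```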
My recently primed munchkin instinct can’t help but notice that the analysis given doesn’t remotely approach the limits specified here. Specifically, it tacitly assumes that Clark uses only the stock standard software that everyone else uses. In fact, it even assumes that Clark doesn’t use even the most rudimentary macro or autocomplete features built into standard word processors!
Assuming that at some point in his life Clark spent several minutes coding (at the limits you calculate) in anticipation of at some point in the future wishing to type fast all subsequent text input via the PS/2 protocol could occur a couple of orders of magnitude faster. Optimisations would include:
Abandon the preconception that pressing the key with the “A” painted on it puts the letter ‘a’ in the text, or any of the other keys for that matter—especially the ones that aren’t so common! Every key press is log2(number of keys) bits of information. Use all of it.
A key_press uses 33 bits of bandwidth total but key_press isn’t a discrete operation. 11 bits are used for key_down and 22 for key_up but these don’t need to follow each other directly (for example see conventional usage of shift, control and alt). As far as the PS/2 protocol is concerned key_up supplies another log2(number of keys) bits of information (for the cost of 22 bits of bandwidth).
Given that Clark constructed his own hardware he could easily make use of the full 2*log2(number of keys) bits of information per 33 bits of information by making his keyboard send only a key_down on the first keypress and a key_up on the second keypress (alternating).
If Clark is using a standard keyboard then he can still send more information via key_up but is now limited by fingers. Since he has only 10 fingers, before every key_down (after the first 10) he can send one or more key_ups. Which finger(s) he chooses to lift is influenced by the proximity of the keys to each other. Optimal use of this additional information would use a custom weighted “twister” protocol that extracts every bit of information available in the choice of “left index finger T” instead of “right index finger T” when both were bio-mechanically plausible options. For this reason, if Clark is using a standard keyboard I recommend he use the smallest layout possible. A laptop’s keys being cramped is a feature!
Human languages (like English) are grossly inefficient in terms of symbol use. Shannon (of Shannon entropy fame) measured the entropy of English text at between 1 and 1.5 bits per letter, even when using mere human subjects guessing what the next letter would be. Some letters are used way too much, simple combinations of letters like “atbyl” have no meaning, some word combinations are more likely than others andIcanreadthiswithoutdifficulty. If bandwidth rather than processing power is the limit, compression is called for. I estimate that Clark’s Text Over PS/2 Protocol ought to be at least as efficient as Shannon’s “subjects can guess what is coming next” findings for typical text while remaining lossless (albeit less efficient) even under unusual input.
Since Clark wants to maintain a secret identity, his keyboard must operate normally except when he is typing fast. This is easy enough to accomplish via any one of:
An unmarked button that requires superhuman strength to press.
A keyboard combination (F12 D u _ @ F3 W * & etc) that will not occur randomly but still takes negligible time to enter.
The software just starts interpreting the input differently once a sufficient number of keys have been input in rapid succession. (This seems preferable.)
That wouldn’t help; he can’t then choose to send “key_up (a)” followed by “key_up (a)”, there has to be a “key_down (a)” inbetween.
He could, of course, simply elect to have his personal keyboard ignore key_ups and send only the shorter key_down codes, meaning that he has only 11 bits per character. Aside from that minor quibble, though, you make several excellent points.
If he’s writing his own keyboard driver, he can take this even further, and have his keyboard (when in speed mode) deliver a different set of scancodes; he can pick out 32 keys and have each of them deliver a different 5-bit code (hitting any key outside of those 32 automatically turns off speed mode). In this manner, his encoding efficiency is limited only by processing power (his system will have to decode the input stream pretty quickly) and clock rate (assuming he doesn’t mess with the desktop hardware, he’d probably still have to stick to 16.7kHz). Since modern processors run in the GHz range, I expect that the keyboard clock rate will be the limiting factor.
Unless he starts messing with his desktop’s hardware, of course.
You seem to have read the text incorrectly. The passage you quote explicitly mentions sending both key_down and key_up and even uses the word ‘alternating’. ie. All that is changing is a relatively minor mechanical detail of what kind of button each key behaves as. If necessary, imagine that each key behaves something like the button on a retractable ball point pen. First press down. Second press up. All that is done is removing the need to actually hold each key down with a finger while they are in the down state.
I notice that I am confused. You say that I have read the original text incorrectly, and then you post a clarification that exactly matches my original interpretation of the text.
I see two possible causes for this. Either I have misunderstood you (as you state) and, moreover, continue to misunderstand you in the same way; or you have misunderstood me.
Therefore, I shall re-state my point in more detail, in the hope of clearing this up.
Consider the ‘a’ key. This point applies to all keys equally, of course, but for simplicity let us consider a single arbitrary key.
Under your proposed keyboard, the following is true.
The first time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’
The second time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’
The third time Clark presses ‘a’, the keyboard sends key_down (a). This is 11 bits, encoding the message ‘key “a” has been pressed’
The fourth time Clark presses ‘a’, the keyboard sends key_up (a). This is 22 bits, encoding the message ‘key “a” has been pressed’
I therefore note that replacing every key_up with a key_down saves a further 11 bits per 2 keystrokes, on average, for no loss of information.
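The wire-rates of the three encodings discussed in this subthread — standard press-plus-release, the alternating down/up keyboard, and key_down-only — can be compared in a quick sketch, using the same assumed 16.7 kHz clock and 11-bit scancodes as upthread:

```python
CLOCK_HZ = 16_700   # assumed PS/2 clock
SCANCODE = 11       # bits per scancode

# Average bits on the wire per keystroke under each scheme.
schemes = {
    "standard press + release (33 bits)": 3 * SCANCODE,
    "alternating key_down/key_up (16.5 bits avg)": 1.5 * SCANCODE,
    "key_down only (11 bits)": SCANCODE,
}

for name, bits in schemes.items():
    print(f"{name}: {CLOCK_HZ / bits:.0f} keystrokes per second")
# 506, 1012, and 1518 keystrokes per second respectively
```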
Both at once is also a possibility (and from my re-analysis seems to be the most likely).
Allow me to abandon inferences about interpretations and just respond to some words.
This claim is false. It would help a lot! It improves bandwidth by a factor of a little under two over the alternative of not making optimal use of the key_up signal as well as the key_downs. As for how much improvement the keyboard change is over merely using all 10 fingers optimally… the math gets complicated and is dependent on things like finger length.
I agree. If just abandoning key_up scancodes altogether is permitted then obviously do so! I used them because, from what little I understand of the PS/2 keyboard protocol (from reading CCC’s introduction, then a little additional research), the key_ups are not optional, and I decided that leaving them out would violate CCC’s assumptions. I was incidentally rather shocked at the way the protocol worked. 22 bits for a key_up? Why? That’s a terrible way to do it! (Charity suggests to me that bandwidth must not have been an efficient target of optimisation resources at design time.)
Yes, you are right. On re-reading and looking over this again, I see that I misread you there; for some reason (even after I knew that misreading was likely) I read that as 2*log2(number of keys) bits of information per keypress instead of per 33 bits of information.
Ah, right. My apologies; I’d thought that the idea of drawing log2(number of keys) bits of information per keypress already implied a rejection of the assumption that the PS/2 protocol would be used.
Well, to be fair, the PS/2 protocol is only intended to be able to keep up with normal human typing speed. The bandwidth limit sits at over 80 characters per second, and I don’t think that anyone outside the realm of fiction is likely to ever type at that speed, even just mashing keys randomly.
Characters per second and words per minute don’t match; wpm is typically calculated with 5 characters per word, so 80 cps would correspond to 960 wpm.
Samantha Carter used an entire computer lab when she was supercharged. Her limit was the key buffer if I recall. Depending on how computers have evolved the limiting factor could be the mechanics these days. If so, then it may be efficient to use several keyboards simultaneously.
Especially if he uses his (laser) eyes. Depending on his power level he could possibly write faster than the speed of light. If my past research is correct, Superman is most powerful when inside a sun (a blue star is best, but ours would be fine). So he could perhaps write most quickly by positioning himself at the surface of the sun and engraving on the surface of, say, Mars, or perhaps a moon of Jupiter. The limit on his output would then be either the precision of his eyesight, how fast he can control the muscles that move said eyes, or, if those capabilities are sufficiently excessive, how fast he can think.
Or he could simply bring a stone tablet with him, and make sure to remain far enough from the Sun so as not to melt his writing materials.
Using a distant tablet allows him to write faster than the speed of light without abandoning special relativity just by wiggling his eyes a little. If we assume he is one of the later incarnations who in fact can fly faster than the speed of light (and so has superluminal text output even while close to the writing material) then the advantage that remains for positioning himself at the surface of the sun is that his writing speed is limited by his laser energy output. His eyes are reputedly the most energy draining of his powers and so the amount of rock that he can burn away with his eyes is limited by his power input. Positioning himself at the surface of the sun gives him a couple of orders of magnitude more rock burning potential per second.
Yyyyyes. I was thinking that at that distance, Mars subtends such a small angle that very large letters are extremely likely, and space is therefore limited; but he could easily take a tablet and leave it in a near-solar orbit on his way to the Sun.
Of course, it’ll take him a bit of time to get to the Sun; he’ll probably need to have rather a lot to write to make up for the travel time.
… and that’s what happens when you f* around with “key_down”.
I dunno, how fast can a chisel carve stone before getting blunt, shattering the tablet, or something?
That’s an interesting question. Assuming he doesn’t use his own fingernails or his eyes and doesn’t have access to materials from his own fictional universe what sort of chisel would he use? The best I know of is Tungsten Carbide. Fortunately if extensive use blunts the chisel he can just sharpen it again with his fingers or eyes. He would of course also cool it down after every sentence or two by breathing on it so that friction doesn’t raise the temperature above 500°C where oxidisation starts. Or he could do his writing in a vacuum where he would only have to be concerned about the 2870°C melting point.
I’m not sure about limits when it comes to shattering the tablet (where the tablet could be, say, Uluru). With a little practice, unbounded dexterity to rely on, and no need to use something so crude as a hammer to apply force, Superman could get very close to the limits of the amount of shock the ‘tablet’ could absorb. While I have no formal training in superhuman engraving best practice, I suspect the optimal technique would more closely resemble “extremely fast scratching” than “chiselling” per se. It seems highly probable that the limit the rock could handle would be far faster than that of the PS/2 protocol. If it is necessary to reduce the concentrated stress on the rock, Superman can even fly back and forth like a dot matrix printer, scratching small parts of letters each time in the least damaging configuration.
This looks like a job for Randall!
I love his “What If?” even more than his cartoons. Yesterday I was wondering if he could tell me what would happen if all of the electrons in my body instantly vanished. Specifically how big the explosion would be but also whether a bunch of free protons and nuclei at that energy level would do anything exciting.
In your body there are about 0.55 electrons for each nucleon (from this and approximating Z/A as 1 for hydrogen and 0.5 for anything else); i.e., about 3.3e26 electrons per kilogram of matter; that is, their electric charge is about −5.3e7 coulombs per kilogram, and the electric charge of your body if the electrons vanished would be 5.3e7 C/kg. The electrostatic energy is then kQ^2/(4*pi*ε0*r), where r is your “size” and k is some factor roughly of order 1 depending on your “shape” (e.g. 3⁄5 for a uniform ball of radius r). That’d be in the ballpark of 1e30 joules, or 1e14 megatons of TNT: half a dozen orders of magnitude more than the Chicxulub impact, but about fourteen orders of magnitude less than a supernova.
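A back-of-the-envelope version of this estimate. The 70 kg mass, 0.5 m radius, and uniform-ball shape factor are assumed parameters; with these values the result lands a factor of a few below the quoted ~1e30 J, which is still within the stated ballpark given how sensitive the figure is to the assumed size and shape:

```python
import math

E_CHARGE = 1.602e-19        # elementary charge, C
EPS0 = 8.854e-12            # vacuum permittivity, F/m
ELECTRONS_PER_KG = 3.3e26   # ~0.55 electrons per nucleon, from the comment

mass_kg = 70.0              # assumed body mass
radius_m = 0.5              # assumed "size"
shape_k = 3 / 5             # uniform ball

# Net positive charge left behind, and its electrostatic self-energy.
charge = ELECTRONS_PER_KG * mass_kg * E_CHARGE
energy = shape_k * charge**2 / (4 * math.pi * EPS0 * radius_m)

print(f"charge left behind: {charge:.1e} C")
print(f"energy: {energy:.1e} J (~{energy / 4.184e15:.1e} megatons of TNT)")
```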
So, from the sounds of it: extinction of complex life on Earth, but nowhere near enough to destroy the planet.
Prob’ly something like this.
Ahh, good point. That seems about right.
After estimating the total energy and thence the energy per particle, it looks like the average particle would have UHECR-like energy, so they would each generate an extensive air shower, “spreading” the energy over larger volumes than it otherwise would. (But when you have so many showers superimposed to each other, I’m not sure the total effect would be much different from each particle interacting locally.)
What about a laser pen, built to withstand some substantial G-forces, writing on to a light-sensitive material, which is then photographed with a high speed camera?
The guy who has laser eyes is going to use a laser pen built especially to accommodate his super-special needs? I suppose he could do that. Then he could choose not to use his flight power but use his super strength to power an awesome bike-helicopter.
Like this one?
Personally, I find there’s a difference between picking a percentage of my income to give away and letting homeless people move into my house. I would not do the latter.
For people who enjoy giving, there are ways to avoid or minimize these sorts of guilty feelings. For example, some religious folk (perhaps unwittingly) use tithing as a sort of a Schelling fence to prevent themselves from feeling bad about not giving more.
I don’t think I’ve ever worn anywhere near $2300’s worth of clothes at the same time. In fact, I’m not even sure I’ve ever worn $2300’s worth of anything at all (not counting money in the debit card in my wallet) in the past few years—my netbook cost about €300 when I bought it several years ago.
Fair enough, I haven’t either and I definitely didn’t choose the best example there, but my reply to JonahSinick downthread addresses this:
Yes, I could imagine some weird situation where I can dive into a pond but only by destroying $2300’s worth of value in the process—but such a situation would be so far removed from the situations I usually deal with in my daily life that I’m not at all confident about whether I would feel obligated to save the child.
(Yes, I know in principle I could say this about any thought experiment, but...)
I was going to write a post describing why I didn’t find your argument compelling, but then I realized that I would find it perfectly compelling if the estimate for $/life saved had gone up to say, $10 million. So apparently my true rejection of your argument isn’t what I was going to write—it’s that I just don’t find the difference between $200 and $2000 to be that significant.
Even if you would save the child at a price of $2000, it’s still important to have a representative hypothetical in mind rather than using Singer’s child in a pond scenario, in order to be well calibrated with respect to the value of donating to alleviate poverty relative to other altruistic activities.
But Jonah is not merely saying that some sufficiently big difference between Singer’s estimates and the most credible estimates would affect his argument. He is making the stronger claim that the actual difference is big enough to make the argument far less credible.
From Ben Kuhn:
I recently bought a second monitor. If I were carrying it on my back and saw a drowning child, I would jump into the pond, save the child, and fry the monitor. But then I would go buy another monitor, because I realized that if I’m switching back and forth between programming, documentation, and other windows all the time, the ones in the background consume nagging bits of attention and I get a lot worse at doing high-value tasks like programming. I calculate that given my hourly wage, if my second monitor makes me even 1% better at programming because I can keep track of more things, it will pay for itself in less than a year, so it’s a definite win.
Of course, there’s the question, “What if you see another child the next day? Do you keep jumping in and out of ponds?” My answer is, “No, Ben should spend his time programming, or finishing his college degree, or figuring out better ways to help people, because any of these will allow him to do more good than as an aquatic aid worker.”
Something I noticed when a friend told me about this (some terms have been altered):
Suppose there are a hundred ponds, with ten children each drowning, ALL THE TIME. Wearing a clean suit will earn you enough money to save more of them by hiring people using your large paycheck (I shall assume this suit is good enough to get you a decent job) to fish children out of ponds. In the mean time, you’d ALSO be living a comfortable life, which will further allow you to buy job-getting suits for saved children and divers, thereby increasing the number of people that will impact the situation. You would run out of drowning children pretty quick, then, and even supposing you never do, will be able to dive in to save any that DO fall in with no fears about your suit (since you can just buy another one later).
We do not live in a world of one pond and one child. I suspect we live in a world with considerably more ponds and more children than even the one above. Currently, the world we live in is full of divers (if only in potential, since people are willing to do ANYTHING for money these days), but may need some more suit-wearing investors into charity. Therefore, keep walking, get a decent job, THEN come back to the ponds with a team of divers.
PS: Would be interested to know how much money would be needed to solve the WHOLE World Hunger problem, WHOLE Poverty Problem, and so on. I suspect it will help determine just how many people are needed to fish EVERY child out of EVERY pond, and thereby show what the proportion between divers and suit-wearing investors should be. Before then, I’ll keep on working on getting that suit (or them blue suede shoes), and dive only when I can afford to.
I wrote a blog post arguing against focusing on earning to give to the exclusion of pursuing a career with direct altruistic impact.
Thank you. I had no idea you posted that! Does cast some light on what was once unclear...
The issue is in HOW one does something as much as WHAT one does, it would seem—I am a personal care provider (and volunteer) as well as an organizer for conventions, so I do understand where you are going with this. I am both working to improve the world in some small way and to get money so I can later give people money when I am wealthy, and I did not even consider my own approach (personal as it is) until your comment made me realize how limited (and un-diverse) it is to exclude one method in favor of another.
40 years ago, when Singer first made this argument, shoes were more expensive. Also, his shoes, today, are more expensive than your shoes, today. But he didn’t look into actual numbers until quite recently.
I guess that, more than that, saving children was cheaper back then, because the low-hanging fruit hadn’t all been collected yet. (ISTR reading something about that in a comment on Slate Star Codex or Overcoming Bias sometime in the last few months, but I can’t seem to find it now.)
I am now attempting to estimate the expected number of small children that will drown as a result of Peter Singer’s argument.
Ha! It doesn’t actually follow from the negation of the conclusion of Singer’s argument that you’re not obligated to save the drowning child, though: the antecedent of that conclusion is a conjunction, of which ‘I am obligated to save the drowning child’ is one of the conjuncts.
ETA: I now see that what I’m saying is that the quote in the parent is wrong: he doesn’t actually argue that these two are equivalent. You also need to accept the premise (perhaps among others) that if you’re obligated to save the drowning child, you’re obligated to save a child who isn’t in danger right there in your immediate presence.
I see the child-in-a-pond scenario as an intuition pump. It takes a case where people naturally tend to care about others and feel an inclination/urge/motivation to help (even at a cost to themselves) and connects it to global poverty charity to try to induce the same urge to help others in the context of global poverty. Two of the main changes that it makes in reframing the problem are:
It takes a problem that’s far and makes it up-close and personal, with a drowning child right there in front of you.
It takes the marginal benefit that you can provide by donating one life’s worth of money ($3k or so) and treats that as the entire problem. (When reality is more like: in expectation 1,184,506 people will die of malaria this year if you don’t help, and for each $1k that you give that number drops by 0.3 expected deaths.)
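The marginal-impact arithmetic in that second point can be sketched quickly (the death toll and cost-per-life figures are the thread’s rough illustrative numbers, not real charity data):

```python
# Rough sketch of the marginal-impact framing, using the thread's
# illustrative numbers (not real charity figures).
TOTAL_EXPECTED_DEATHS = 1_184_506  # expected malaria deaths this year
COST_PER_LIFE = 3_000              # rough dollars per expected life saved

def expected_deaths_after(donation_dollars: float) -> float:
    """Expected deaths remaining after a donation, under a linear model."""
    return TOTAL_EXPECTED_DEATHS - donation_dollars / COST_PER_LIFE

# Each $1k reduces expected deaths by about a third of a life --
# roughly the "0.3" figure quoted in the comment.
marginal_per_1k = TOTAL_EXPECTED_DEATHS - expected_deaths_after(1_000)
print(round(marginal_per_1k, 2))  # → 0.33
```

The point of the sketch is only that each donation shaves a tiny sliver off a huge total, which is exactly the framing the pond scenario replaces.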
The second change is important because several related heuristics / mental models / lines of reasoning influence how much of an urge to help a person feels. Solving a problem feels more satisfying than slightly reducing its extent; the urge to help is stronger when you are the only one who can help than when lots of people can help and the question arises of whether each person is doing their share; and a yes-or-no question about whether to help by a moderate amount feels more motivating than a boundless demand for all the help you can give (where any bit of help could lead one down a slippery slope).
If the framing from the child-in-a-pond scenario sticks, it can not only make people more likely to give, it can also make them feel better about giving, by replacing those thought patterns with one where giving is tied to a naturally arising positive motivation to help (rather than some external obligation) and is framed in a way that encourages you to feel good about the help you do provide (instead of feeling overwhelmed by the scope of the problem, etc.).
I agree with these points, and with Holden’s view that Singer’s child-in-a-pond argument is generally underappreciated rather than overappreciated.
The part that I was highlighting as misleading is the implicitly claimed effect size. People who aspire to do the most good may be connotatively misled into thinking that alleviating global poverty is a more promising target for optimal philanthropy than it actually is.
Carl Shulman also presented some arguments against the lake drowning thought experiment here.
Thanks Kaj.
(Thank you for providing positive reinforcement!)
Even though people’s intuitions do lead them to believe it is morally necessary to save the hypothetical drowning child in that particular scenario, I wager that there are situations in which people’s intuitions would lead to other conclusions. One relevant hypothetical scenario is one in which one is amidst a group of people who are also observing the drowning child, and who are better able to bear the economic hardship of losing a pair of dress shoes (I know that the phrase “economic hardship” sounds rather callous in this scenario, but I cannot think of a better phrase off the top of my head). Hell, perhaps some of them own thousands of pairs, while you own only one pair.
I guess what I am trying to say is that I have a pet theory that people’s objections to Singer’s scenario, whether they know it or not, are largely game-theoretical. In light of this, I see debates over the precise cost of saving a child as being not irrelevant, but as having little to do with a much more important objection to Singer’s argument.
Really? At least as a matter of my intuitions I’d say the obligation is no different. You might be able to argue that if you have a car worth twice AMF’s cost per life saved, you’d save more lives letting the kid get turned into goo, and then selling the car and donating the money. But for what my intuitions are worth I would flip the switch. I’d flip it if my car were worth a million times the cost per life saved, and I think I’d (at least) say some harsh things to anyone who so much as hesitates to flip the switch. I don’t think it follows from this that I should donate anything at all to AMF, but that’s another premise and conclusion.
The hypothetical here isn’t realistic unless you’re a billionaire. Here is a more realistic one: would you become an indentured servant for life in order to save the child? If your lifetime earning power is $5m, then becoming an indentured servant for life gets you less than 1⁄40 of the way toward paying a million times the cost of saving a child.
Or, not merely more realistic, but actually having real instances: would you throw your whole productive life into earning as much money as possible in order to give nearly all of it away to the cause of saving children?
Right, that’s better. In that case, I wouldn’t say one has a moral obligation even to save the local child.
Your last example is actually weaker than it could be. Even though it’s completely equivalent, a better way to phrase this is the following:
The train is currently rushing to kill the child, and you’re not part of this situation. You, sitting in your car far away, see this happening. You now have the choice to drive up to the tracks and leave your car on the tracks. This will save the child but destroy your car.
Now it’s clear that you weren’t part of the situation to begin with; you’re just a distant observer who may choose to intervene.
I don’t follow why leaving your car on the tracks prevents the child from being killed.
The same reason fat people can derail trolleys and businesspeople have lifeguard abilities, I’d imagine.
New problem: should you spring for the train-derailing-self-destruct-with-an-ejection-seat option on your new car?
Well, if it costs anything like $2300, then...
Just always make sure you have a fat man as a passenger.
I wonder whether there’s a distancing effect going on there—it’s, apparently, easier to press a button and kill someone than to stab them in the neck and watch ’em die all gurgling—so I wonder whether we feel less inclined to press a button and save someone.
If you were driving your car, and the child was pushed out into the road in front of you, would you redirect your car into a ditch knowing that would write the car off?
An uninsured car, presumably, or you’re only out the inconvenience of replacing it.
I have trouble believing that any insurance will pay for a car that you deliberately divert a train into.
I’d be happy to insure you against this scenario.
While in the TED talk Singer may give the impression that he endorses the claim that one can save a life by donating $200 to a cost-effective charity, his actual position is more cautious, and less vulnerable to Jonah’s objection. In The Life You Can Save (p. 103), he writes:
For context, see this comment.
Dunno if someone has noticed this before, and dunno how relevant this is, but in the child-in-the-pond scenario, you’re the only person who could possibly save that child, whereas in the AMF scenario, anyone giving the same amount of money would save the same children.
Well, it’s certainly psychologically relevant, but I don’t know how relevant it is to Singer’s point.