It seems to me like people keep switching between the “shallow diminishing returns” and “steep diminishing returns” stories, combining claims that only make sense in one scenario with claims that only make sense in the other, instead of taking the disjunction seriously and trying to do some actual accounting. So I keep trying to explain the disjunction.
Could you give an example or two? I don’t mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns—obviously different people may have different opinions—but of a single person doing the sort of combination you describe.
The actual article doesn’t, so far as I can see, at all focus on any such cases; it doesn’t say “look, here are some bogus arguments people make that assume two different incompatible things”; rather, it says “EA organizations say you should give money to EA causes because that way you can do a lot of good per unit money, but they are lying to you and you should do other things with your money instead”. (Not an actual quotation, of course, but I think a fair paraphrase.)
So I don’t understand how your defence here makes any sense as a defence of the actual article.
A couple of other points, while I have your attention.
All credit to you, once again, for linking to what GiveWell actually wrote. But … it seems to me that, while indeed they did use the words “fair share”, your description of their reasons doesn’t at all match what they say. Let me quote from it:
Over the past couple of weeks, we’ve had many internal discussions about how to reconcile the goals of (a) recommending as much giving as possible from Good Ventures to top charities, which we consider outstanding giving opportunities; (b) preserving long-run incentives for individuals to support these charities as well. The proposals that have come up mostly fit into one of three broad categories:
… and then the three categories are “funging”, “matching”, and “splitting”, and it’s in explaining what they mean by “splitting” that they use the words “fair share”. But the goal here, as they say it, is not at all to have everyone save a “fair share” of lives. They give some reasons for favouring “splitting” (tentatively and corrigibly) and those reasons have nothing to do with “fair shares”. Also, they never, btw, talk about a fair share of lives saved but of funding provided, and while of course those things are closely connected they are not intensionally equivalent and there is an enormous difference between “we favour an approach that can be summarized as ‘donors consider the landscape of donors and try to estimate their share of the funding gap, and give that much’” and “it would be bad if anyone saved more than their fair share of lives”.
Could you explain why you chose to describe GiveWell’s position by saying ‘they were worried that Good Ventures would be saving more than their “fair share” of lives’? Do you actually think that is an accurate description of GiveWell’s position?
----
A key step in your argument—though it seems like it’s simply taken the place of other entirely different key steps, with the exact same conclusion allegedly following from it, which as I mentioned above seems rather fishy—goes like this. “If one could do a great deal of good as efficiently as the numbers commonly thrown about imply, then it would be possible to run an experiment that would verify the effectiveness of the interventions, by e.g. completely eliminating malaria in one country. No one is running such an experiment, which shows that they really know those numbers aren’t real. On the other hand, if there’s only a smallish amount of such good to be done that efficiently, then EA organizations should be spending all their money on doing it, instead of whatever else they’re doing. But they aren’t, which again shows that they really know those numbers aren’t real. Either way, what they say is dishonest PR and you should do something else with your money.”
It looks to me as if basically every step in this argument is wrong. Maybe this is because I’m misunderstanding what you’re saying, or failing to see how the logic works. Let me lay out the things that look wrong to me; perhaps you can clarify.
The “great deal of good” branch: running experiments.
It doesn’t at all follow from “there is an enormous amount of good to be done at a rate of $5k per life-equivalent” that there are nice conclusive experiments like reducing malaria deaths to zero in one country for one year and measuring the cost. Many malaria deaths in a given year may be from infections in earlier years; even if a large fraction of malaria can be prevented at $5k per life-equivalent, the marginal cost will surely increase a lot as you get to the hardest cases; and eliminating all malaria deaths somewhere will probably require multiple different kinds of intervention, any given organization has expertise in only a subset of them, and coordination is hard.
You might want (genuinely, or for rhetorical purposes, or both) EA charities’ money to be spent on running nice conclusive experiments, but that is no guarantee that that’s actually the most effective thing for them to be doing.
Still less is it a guarantee that they will see that it is. (It could be that running such an experiment is the best thing they could do because it would convince lots of people and open the floodgates for lots of donations, but that for one reason or another they don’t realise this.) So even if (1) there are nice conclusive experiments they could run and (2) that would actually be the best use of their money, that’s not enough to get from “they aren’t running the experiments” to “they know the results would be bad” or anything like that. They might just have an inaccurate model of what the consequences of the experiments would be. But, for the avoidance of doubt, I think #1 and #2 are both extremely doubtful too.
It’s not perfectly clear to me who is supposed to be running these experiments. In order to get to your conclusion that EA organizations like GiveWell are dishonest, it needs to be those organizations that could run them but don’t. But … I don’t think that’s how it works? GiveWell doesn’t have any expertise in running malaria-net experiments. An organization like AMF could maybe run them (but see above: most likely it would actually take lots of different organizations working together to get the sort of clear-cut answers you want) but it isn’t AMF that’s making the cost-per-life-equivalent claims you object to, and GiveWell doesn’t have the power to force AMF to burn a large fraction of its resources on running an experiment that (for whatever reason) it doesn’t see as the best use of those resources. (You mention the Gates Foundation as well, but they don’t seem actually relevant here.)
The “smallish amount of good” branch: what follows?
If I understand your argument here correctly (which I may well not; for whatever reason, I find all your comments on this point hard to understand), you reckon that if there’s (say) $100M worth of $5k-per-life-equivalent good to do, then GiveWell should just get Good Ventures to do it and move on.
As you know, they have given some reasons for not doing that (the reasons I think you mischaracterized in terms of ‘saving more than their “fair share” of lives’).
I think your position is: what they’re doing is deliberately not saving lives in order to keep having an attractive $5k-per-life-equivalent figure to dangle in front of donors. On that view, if you give $5k in the hope of doing one life-equivalent of good, you’re likely actually just reducing the amount GiveWell will get Good Ventures to contribute by $5k; so even if the marginal cost really is $5k per life-equivalent, you aren’t actually getting that life-equivalent, because of GiveWell’s policies. (I’m not at all sure I’m understanding you right on this point, though.)
Whether or not it’s your position, I think it’s a wrong position unless what GiveWell have said about this is outright lies. When discussing the “splitting” approach they end up preferring, they say this: ‘But they [sc. incentives for individual donors] are neutral, provided that the “fair share” is chosen in a principled way rather than as a response to the projected behavior of the other funder.’ (Emphasis mine.) And: ‘we’ve chosen 50% largely because we don’t want to engineer – or appear to be engineering – the figure around how much we project that individuals will give this year (which would create the problematic incentives associated with “funging” approaches).’
Incidentally, they also say this: ‘For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.’ So for those “highest-value” cases, at least, they are doing exactly what you complain they are not doing.
A separate consideration: the most effective things for a large organization to fund may not be the same things that are most effective for individual donors to fund. E.g., there may be long-term research projects that only make sense if future support is guaranteed. I think the Gates Foundation does quite a bit of this sort of thing, which is another reason why I think you’re wrong to bring them in as (implicitly) an example of an organization that obviously would be giving billions for malaria nets if they were really as effective as the likes of GiveWell say they are.
Suppose it turns out that the widely-touted figures for what it costs to do one life-equivalent of good are, in fact, somewhat too low. Maybe the right figure is $15k/life instead of $5k/life, or something like that. And suppose it turns out that GiveWell and similar organizations know this and are publicizing smaller numbers because they think it will produce more donations. Does it follow that we can’t do a lot of good without a better and more detailed model of the relevant bit of the world than we can realistically obtain, and that we should all abandon EA and switch to “taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits”? I don’t see that it does: to make EA a bad “investment” it seems to me that it has to be much wronger than you’ve given any reason to think it is likely to be. (Jeff K has said something similar in comments to the original article, but you didn’t respond.)
It doesn’t at all follow from “there is an enormous amount of good to be done at a rate of $5k per life-equivalent” that there are nice conclusive experiments like reducing malaria deaths to zero in one country for one year and measuring the cost. Many malaria deaths in a given year may be from infections in earlier years; even if a large fraction of malaria can be prevented at $5k per life-equivalent, the marginal cost will surely increase a lot as you get to the hardest cases; and eliminating all malaria deaths somewhere will probably require multiple different kinds of intervention, any given organization has expertise in only a subset of them, and coordination is hard.
It would be helpful if you actually described the specific quantitative scenario you have in mind here, instead of simply asserting that one exists. What proportion of malaria deaths do you think are from infection in prior years? (Bednets disproportionately save the lives of young children.) How many years does that mean we should expect such an experiment would need to be funded? What percentage of malaria deaths do you think can be prevented at ~$5000 per life saved? What’s the implied maximum effect size at that cost (and at $10k per life saved) in a well-defined area like Madagascar, and what would be the total cost of running such an experiment?
I think you have the burden of proof in the wrong place. You are claiming that if there’s a lot of good to be done at $5k then there must be experiments that are obviously worth pouring a lot of resources into. I’m simply saying that that’s far from clear, for the reasons I gave. If it turns out that actually further details of the situation are such as to mean that there must be good experiments to do, then your argument needs to appeal to those further details and explain how they lead to that conclusion.
I am not making any specific claim about what fraction of malaria deaths are from infection in prior years, or what proportion can be prevented at ~$5k per life-equivalent, etc. To whatever extent those are relevant to the correctness of your claim that EA organizations would be running the sort of experiments you propose if they really believed their numbers, your argument for that claim should already be in terms of those figures.
As you point out, you’re making entirely nonspecific claims. This is a waste of everyone’s time; please stop doing so here.

I’m pointing out what seem to me to be large and important holes in your argument.
To an objection of the form “You have given no good reason to think Y follows from X”, it is not reasonable to respond with “You need to give a specific example of how you can have X and not Y, with realistic numbers in it”.
I claim that you have given no reason to think that if there’s a lot of good to be done at $5k per life-equivalent then there is necessarily an experiment that it’s feasible for (say) GiveWell to conduct that would do something like eliminating all malaria deaths in Madagascar for a year. You’ve just said that obviously there must be.
I reject any norms that say that in that situation anyone saying that your reasoning has gaps in it is obliged to show concrete counterexamples.
However, because I’m an obliging sort of chap, let’s have a go at constructing one and see what happens. (But, for the avoidance of doubt, I am not conceding that if my specific counterexample turns out not to work then it means your claim is right and mine is wrong. Of course it’s possible that you know ahead of time that I can’t construct a working counterexample, on account of having a better understanding than mine of the situation—but, again, in that case communicating that better understanding should be part of your argument.) I’ll look at Madagascar since that’s the country you mentioned specifically.
[EDITED to add:] Although the foregoing paragraph talks about “constructing a counterexample”, in fact what I did in the following paragraphs is just to make some guesses about numbers and see where they lead; I wasn’t trying to pick numbers that are maximally persuasive or anything.
So, first of all let’s find some numbers. Madagascar has a population of about 26 million. Malaria is the 7th most common cause of death there. If I’m reading the stats correctly, about 10% of the population has malaria, and there are about 6k deaths per year. Essentially the entire population is considered at risk. At present Madagascar gets about $50M/year of malaria-fighting from the rest of the world. Insecticide-treated bed nets allegedly reduce the risk of getting malaria by ~70% compared with not having them; it’s not clear to me how that’s defined, but let’s suppose it’s per year. The statistics I’ve seen differ somewhat in their estimates of what fraction of the Madagascan population has access to bed nets; e.g., in this document from the WHO, plot E on page 85 seems to show only ~5% of the population with access to either bed nets or indoor spraying; the table on page 117 says 6%; but then another table on page 122 estimates that ~80% of households have at least one net and ~44% have at least one per two people. I guess maybe most Madagascan households have a great many people? These figures are much lower in Madagascar than in most of Africa; I don’t know why. It seems reasonable to guess that bed net charities expect distribution to be more expensive, more difficult, or less effective in Madagascar than in the other places where they have distributed more nets, but even if this is correct I don’t know what the underlying reasons are. I observe that several African countries have a lot more malaria deaths per unit population; e.g., Niger has slightly fewer people than Madagascar but nearly 3x as many malaria deaths (and also about 3x as many people with malaria). So maybe bed net distribution focuses on those countries?
So, my first observation is that this is all consistent with the possibility that the number of lives saveable in Madagascar at ~$5k/life is zero, because of some combination of { lower prevalence of malaria, higher cost of distributing nets, lower effectiveness of nets } there compared with, say, Niger or the DRC. This seems like the simplest explanation of the fact that Madagascar has surprisingly few bed nets per person, and it seems consistent with the fact that, while it certainly has a severe malaria problem, it has substantially less malaria per person than many other African countries. Let’s make a handwavy guess that the effectiveness per dollar of bednets in Madagascar is half what it is in the countries with the best effectiveness-per-dollar opportunities, which conditional on that $5k/life-equivalent figure would mean $10k/life-equivalent.
Now, as to fatality: evidently the huge majority of people with malaria do not die in any given year. (~2.5M cases, ~6k deaths.) Malaria is a serious disease even when it doesn’t kill you. Back of envelope: suppose deaths from malaria in Madagascar cost 40 QALYs each (life expectancy in Madagascar is ~66y, many malaria deaths are of young children but not all, there’s a lot of other disease in Madagascar and I guess quality of life is often poor, handwave handwave; 40 QALYs seems like the right ballpark) and suppose having malaria but not dying costs 0.05 QALYs per year (it puts you completely out of action some of the time, makes you feel ill a lot more of the time, causes mental distress, sometimes does lasting organ damage, etc.; again I’m making handwavy estimates). Then every year Madagascar loses ~125k QALYs to nonfatal malaria and ~240k QALYs to fatal malaria. Those numbers are super-inexact and all I’m really comfortable concluding here is that the two are comparable. I guess (though I don’t know) that bednets are somewhere around equally effective in keeping adults and children from getting malaria, and that there isn’t any correlation between preventability-by-bednet and severity in any particular case; so I expect the benefits of bednets in death-reduction and other-illness-reduction to, again, be comparable. I believe death, when it occurs, is commonly soon after infection, but the other effects commonly persist for a long time. I’m going to guess that 3/4 of the effects of a change in bednet use happen within about a year, with a long tail for the rest.
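To make that back-of-envelope concrete, here is a minimal sketch of the arithmetic in Python. Every number in it is one of my handwavy guesses above, not a figure from any authoritative source:

```python
# Rough annual QALY burden of malaria in Madagascar, using the guesses above.
# All inputs are handwavy assumptions, not authoritative figures.

cases_per_year = 2_500_000                 # ~10% of ~26M people
deaths_per_year = 6_000
qalys_lost_per_death = 40                  # guess: mix of child and adult deaths
qalys_lost_per_nonfatal_case_year = 0.05   # guess: per person-year with malaria

qalys_lost_fatal = deaths_per_year * qalys_lost_per_death                 # ~240k
qalys_lost_nonfatal = cases_per_year * qalys_lost_per_nonfatal_case_year  # ~125k

print(f"Fatal malaria:    ~{qalys_lost_fatal:,.0f} QALYs/year")
print(f"Nonfatal malaria: ~{qalys_lost_nonfatal:,.0f} QALYs/year")
# The only conclusion I'd lean on: the two burdens are roughly comparable.
```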
So, let’s put that together a bit. Most of the population is not currently protected by bednets. If they suddenly were then we might expect a ~70% reduction in new malaria cases that year, for those protected by the nets. Best case, that might mean a ~70% reduction in malaria deaths that year; presumably the actual figure is a bit less because some malaria deaths happen longer after infection. Call it 60%. Reduction in malaria harm that year would be more like 50%. Cost would be $10k per life-equivalent saved. Total cost somewhere on the order of $50M, a substantial fraction of e.g. AMF’s total assets.
Another way to estimate the cost: GiveWell estimates that AMF’s bednet distribution costs somewhere around $4.50 per net. So one net per person in Madagascar is $100M or so.
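Purely as a sketch of my own guesses (the ~50% reduction in malaria harm, the $10k-per-life-equivalent assumption, and the ~$4.50-per-net figure quoted above), here is the arithmetic behind those two cost estimates:

```python
# Two rough ways of costing a "blanket Madagascar with bednets for a year"
# experiment, following the handwavy guesses above.

# Estimate 1: via cost per life-equivalent.
total_qalys_lost_per_year = 240_000 + 125_000   # fatal + nonfatal, from above
harm_reduction_first_year = 0.50                # guessed reduction in malaria harm
qalys_per_life_equivalent = 40
cost_per_life_equivalent = 10_000               # assumed: 2x the best-case $5k figure

life_equivalents_saved = (total_qalys_lost_per_year * harm_reduction_first_year
                          / qalys_per_life_equivalent)                  # ~4,600
cost_via_life_equivalents = life_equivalents_saved * cost_per_life_equivalent

# Estimate 2: via cost per net (one net per person).
population = 26_000_000
cost_per_net = 4.50                             # GiveWell's rough figure for AMF
cost_via_nets = population * cost_per_net

print(f"Via life-equivalents: ~${cost_via_life_equivalents / 1e6:.0f}M")
print(f"Via nets:             ~${cost_via_nets / 1e6:.0f}M")
# Both land somewhere in the tens-to-low-hundreds of millions of dollars.
```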
But that’s only ~60% of the deaths; you wanted a nice clear-cut experiment that got rid of all the malaria deaths in Madagascar for one year. And indeed cutting deaths by 60% would not necessarily be conclusive, because the annual variation in malaria cases in Madagascar seems to be large and so is the uncertainty in counting those cases. In the 2010-2017 period the point estimates in the document I linked above have been as low as ~2200 and as high as ~7300; the error bars each year go from just barely above zero to nearly twice the point estimate. (These uncertainties are much larger, incidentally, than in many other African countries with similar malaria rates, which seems consistent with there being something about Madagascar that makes treatment and/or measurement harder than other African countries.)
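To illustrate why a ~60% one-year reduction might not be conclusive, here is a minimal sketch comparing the expected post-intervention death count with the historical spread of point estimates just quoted (again, every number is one of the rough figures above):

```python
# Would a ~60% one-year drop in malaria deaths stand out against Madagascar's
# historical variation and measurement uncertainty? A rough check.

baseline_deaths = 6_000
assumed_reduction = 0.60
expected_deaths_after = baseline_deaths * (1 - assumed_reduction)   # ~2,400

# 2010-2017 point estimates for annual malaria deaths ranged roughly:
historical_low, historical_high = 2_200, 7_300
# ...and the quoted error bars each year run from just above zero to
# nearly twice the point estimate.

indistinguishable = historical_low <= expected_deaths_after <= historical_high
print(f"Expected deaths after intervention: ~{expected_deaths_after:,.0f}")
print(f"Within the range of past point estimates: {indistinguishable}")
# i.e. a single post-intervention year could look like an ordinarily good year.
```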
To get rid of all (or nearly all) the deaths in one year, presumably you need to eliminate infection that happens while people aren’t sleeping under their bed nets, and to deal with whatever minority of people are unwilling or unable to use bed nets. Those seem like harder problems. I think countries that have eliminated malaria have done it by eliminating the mosquitoes that spread it, which is a great long-term solution if you can do it but much harder than distributing bed nets. So my best guess is that if you want to get rid of all the malaria, even for one year, you will have to spend an awful lot more per life-equivalent saved that year; I would be unsurprised by 10x as much, not that surprised by 100x, and not altogether astonished if it turned out that no one actually knows how to do it for any amount of money. It might still be worth it if the costs are large—the future effects are large if you can eliminate malaria from a place permanently. (Which might be easier in Madagascar than in many other African countries, since it’s an island.) But it puts the costs out of the range of “things existing EA charities could easily do to prove a point”. And it’s a Gates Foundation sort of project, not an AMF one, and indeed as I understand it the Gates Foundation is putting a lot of money into investigating ways to eliminate malaria.
Tentative conclusion: It’s not at all obvious to me that this sort of experiment would be worthwhile. For “only” an amount of money comparable to the total assets of the Against Malaria Foundation, it looks like it might be possible to somewhat-more-than-halve malaria deaths in Madagascar for one year (and reduce ongoing malaria a bit in subsequent years). The expected benefits of doing this would be substantially less than those of distributing bed nets in the probably-more-cost-effective other places where organizations like AMF are currently putting them. Given how variable the prevalence of malaria is in Madagascar, and how uncertain the available estimates of that prevalence seem to be, it is not clear that doing this would be anything like conclusive evidence that bednet distribution is as effective as it’s claimed to be. (All of the foregoing is conditional on the assumption that it is as effective as claimed.) To get such conclusive evidence, it would be necessary to do things radically different from, and probably far more expensive than, bednet distribution; organizations like AMF would have neither the expertise nor the resources to do that.
I am not very confident about any of the numbers above (other than “easy” ones like the population of Madagascar), and all my calculations are handwavy estimates (because there’s little point doing anything more careful when the underlying numbers are so doubtful). But what those calculations suggest to me is that, whether or not doing the sort of experiment you propose would be a good idea, it doesn’t seem to be an obviously good idea (since, in particular, my current best estimate is that it would not be a good idea). Therefore, unless I am shown compelling evidence pointing in a different direction, I cannot take seriously the claim that EA organizations that aren’t doing such experiments show thereby that they don’t believe that there is large scope for doing good at a price on the order of $5k per life-equivalent.
You’ve given a lot of details specifically about Madagascar, but not actually responded to the substantive argument in the post. What global picture does this correspond to, under which the $5k per life saved figure is still true and meaningful? I don’t see how the existence of somewhere for which no lives can be saved for $5k makes that claim any more plausible.
Your claim, as I understood it—which maybe I didn’t, because you have been frustratingly vague about your own argument at the same time as demanding ever-increasing amounts of detail from anyone who questions it—was that if the $5k-per-life-equivalent figure were real then there “should” be some experiment that could be done “in a well-defined area like Madagascar” that would be convincing enough to be a good use of the (large) resources it would cost.
I suggest that the scenario I described above is obviously consistent with a $5k-per-life-equivalent figure in the places where bednets are most effective per unit spent. I assume you picked Madagascar because (being isolated, fairly small, etc.) it would be a good place for an experiment.
If you think it is not credible that any global picture makes the $5k figure “true and meaningful” then it is up to you to give a good argument for that. So far, it seems to me that you have not done so; you have asserted that if it were true then EA organizations should be running large-scale experiments to prove it, but you haven’t offered any credible calculations or anything to show that if the $5k figure were right then doing such experiments would be a good use of the available resources, and my back-of-envelope calculations above suggest that in the specific place you proposed, namely Madagascar, they quite likely wouldn’t be.
Perhaps I’m wrong. I often am. But I think you need to provide more than handwaving here. Show us your detailed models and calculations that demonstrate that if the $5k figure is anywhere near right then EA organizations should be acting very differently from how they actually are acting. Stop making grand claims and then demanding that other people do the hard work of giving quantitative evidence that you’re wrong, when you yourself haven’t done the hard work of giving quantitative evidence that you’re right.
Once again I say: what you are doing here is not what arguing in good faith usually looks like.
Also, they never, btw, talk about a fair share of lives saved but of funding provided, and while of course those things are closely connected they are not intensionally equivalent and there is an enormous difference between “we favour an approach that can be summarized as ‘donors consider the landscape of donors and try to estimate their share of the funding gap, and give that much’” and “it would be bad if anyone saved more than their fair share of lives”.
In the context of a discussion about how much money to give to a specified set of nonprofits, where no other decisions are being discussed other than how to decide how much money to give, what is the difference?
It’s a bit like the difference between “Ben thinks Gareth is giving too much money to the Against Malaria Foundation” and “Ben thinks Gareth isn’t letting enough babies die of malaria”, in the context of a discussion about how individuals should allocate their money.
Could you give an example or two? I don’t mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns—obviously different people may have different opinions—but of a single person doing the sort of combination you describe.
I think Scott’s doing that here, switching back and forth between a steep diminishing returns story (where Good Ventures is engaged in at the very least intertemporal funging as a matter of policy, so giving to one of their preferred charities doesn’t have straightforward effects) and a claim that “you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life.”
The more general pattern is people making nonspecific claims that some number is “true.” I’m claiming that if you try to make it true in some specific sense, you have to posit some weird stuff that should be strongly decision-relevant.
So I assume you’re objecting to his statement near the end that “the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty)”, on the basis that he should actually say “you probably can’t really save a life for $5000 because if you give that $5000 then the actual result will be that Good Ventures gives less in future because GiveWell will make sure of that to ensure that alleged $5000 opportunities continue to exist for PR reasons”.
But I don’t see the alleged switching back and forth. So far as I can see, Scott simply disagrees with you about the intertemporal funging thing, perhaps for the same reason as I think I do (namely, that GiveWell’s actual statements about their recommendations to Good Ventures specifically claim that they are trying to make them in a way that doesn’t involve intertemporal funging of a sort that messes up incentives in the way you say it does).
Where do you think Scott’s comment assumes the “steep diminishing returns story”?
It does tell a steep-diminishing-returns story about the specific idea of trying to run the sort of experiment you propose. But part of his point is that that sort of experiment would likely be inefficient and impractical, unlike just continuing to do what AMF and similar charities are already doing with whatever funding is available to them. The diminishing returns are different in the two scenarios, and it could be that they are much steeper if you decide that your goal is to eliminate all malaria deaths on Madagascar than if your goal is to reduce malaria in all the areas where there’s a lot of malaria that can be addressed via bed nets. It can simultaneously be true that (1) there are readily available opportunities to save more than 6k extra lives by distributing more bed nets, at a cost of $5k per life saved, and that (2) if instead you want to save specifically all 6k people who would otherwise have died from malaria in Madagascar this year, then it will cost hugely more than $5k per life. And also, relatedly, that (3) if instead of this vague “you” we start trying to be specific about who is going to do the thing, then in case 1 the answer is that AMF can save those lives by distributing bed nets, a specific thing that it knows how to do well, whereas in case 2 the answer is that there is no organization that has all the competences required to save all those lives at once, and that making it happen would require a tremendous feat of coordination.