Is GiveWell.org the best charity (excluding SIAI)?
Update: I should’ve said “non-existential risk charity”, rather than specifically exclude SIAI. I’m having trouble articulating why I don’t want to give to an existential risk charity, so I’m going to think more deeply about it. This post is close to my source of discomfort, which is about the many highly uncertain assumptions necessary to motivate existential risk reduction. However, I couldn’t articulate this argument properly before, so it might not be the true source of my discomfort. I’ll keep thinking.
I received my first pay-cheque from my first job after getting my degree, so it’s time to start tithing, and I’ve been evaluating which charity to donate to. I’d like to support the SIAI, but I’m not currently convinced it’s the best-value charity in a dollars-per-life sense once time-value-of-money discounting is applied. I’d like to discuss the best non-SIAI charity available.
By far the best source of information I’ve found is www.givewell.org. It was started by two hedge fund managers who were struck by the absence of rational charity evaluations, so decided that this was the most pressing problem they could work on.
Perhaps the clearest, deepest finding from the studies they pull together and discuss is that charity is hard. Spending money doesn’t automatically translate to doing good. It’s not even enough to have smart people who care and know a lot about the problem think of ideas, and then spend money doing them. There’s still a good chance the idea won’t work. So we need to be evaluating programs rigorously before we scale them up, and keep evaluating as we scale.
The bad news is that this isn’t how charity is usually done. Very few charities make convincing evaluations of their activities public, if they carry them out at all. The good news is that some of the programs that have been evaluated are very, very effective. So choosing a charity rationally is absolutely critical.
Let’s say you’re interested specifically in HIV/AIDS relief.[1] You could fund a program that mainly distributes Anti-Retroviral Therapy to HIV/AIDS patients, which has been estimated conservatively to cost $1494 per disability adjusted life-year (DALY). Alternatively, you could fund a condom distribution program, which has been estimated conservatively to cost $112 per DALY. Or, you could fund a program to prevent mother-to-child transmission, which has been estimated conservatively to cost $12 per DALY. So even within HIV/AIDS, funding the right program can make your donation two orders of magnitude more effective. By tithing 10% of my income every year for the next thirty years, I could have a bigger impact than a $25 million donation, if the person who placed that donation only did an okay job of choosing a charity.
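To make the spread concrete, here is a quick back-of-the-envelope sketch in Python. The $75,000 salary is a hypothetical assumption (I haven’t stated my income); the cost-per-DALY figures are the conservative estimates quoted above.

```python
# Cost-per-DALY figures quoted above (conservative estimates, USD).
cost_per_daly = {
    "antiretroviral therapy": 1494,
    "condom distribution": 112,
    "mother-to-child prevention": 12,
}

salary = 75_000                    # hypothetical annual income
my_total = 0.10 * salary * 30      # 10% tithe for thirty years

# DALYs purchased by my tithe at the most effective program, versus a
# $25M donation placed with the least effective of the three.
my_dalys = my_total / cost_per_daly["mother-to-child prevention"]
big_donor_dalys = 25_000_000 / cost_per_daly["antiretroviral therapy"]

print(f"my tithe, best program:  {my_dalys:,.0f} DALYs")
print(f"$25M, 'okay' program:    {big_donor_dalys:,.0f} DALYs")
```

On these assumed numbers, the thirty-year tithe edges out the $25M donation; program choice swamps donation size.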
GiveWell currently gives its top recommendation to VillageReach, a charity that seeks to improve logistics for vaccine delivery to remote communities. The evidence is less cut-and-dried than you’d ideally want, but it’s still compelling. During the four-year pilot project in Mozambique, they took vaccination rates up to 95% and had very low stock-out rates for vaccines. They’re estimated to have spent about $200 USD per life saved. Even if future projects are two or three times less efficient, you’re still saving a life for $600. Think about how little money that is. If you tithe, you can probably expect to save 10 lives a year. That’s massive.
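The “10 lives a year” figure is easy to reconstruct; a sketch, assuming a hypothetical $60,000 income (again, not my actual figure) and the pessimistic $600-per-life number above.

```python
cost_per_life_pilot = 200                  # USD per life saved, VillageReach pilot estimate
cost_per_life = cost_per_life_pilot * 3    # assume future projects are 3x less efficient

income = 60_000                # hypothetical annual income
annual_tithe = 0.10 * income   # a 10% tithe
lives_per_year = annual_tithe / cost_per_life
print(f"expected lives saved per year: {lives_per_year:.0f}")
```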
Instead of donating directly to VillageReach, I’m going to just donate to GiveWell. They pool the funds they get and distribute them to their top charities, and I trust their analytic, evidence-based, largely utilitarian approach. Mostly, however, I think the work they’re doing gathering and distributing information about charities is critically important. If more charities actually competed on evidence of efficacy, the whole endeavour might be a lot different. Does anyone have any better suggestions?
[1] I don’t understand why people would want to help sufferers of one disease or condition specifically, instead of picking the lowest-hanging fruit, but apparently they do.
Seems sensible; note the following from GiveWell’s plan for 2011: top-level priorities:

“…so that GiveWell has room for more funding.”
According to the most recent board meeting, GiveWell has avoided (and will continue for the foreseeable future to avoid) soliciting the general public, in order to avoid the appearance of a conflict of interest in the eyes of people who are unfamiliar with the organization and are looking for charity recommendations. But for people who are already sold on GiveWell’s mission and trust the staff to use funding wisely, giving directly to GiveWell makes sense. As you point out, in the past GiveWell has redistributed excess funds to its top-rated charities.
I respect your intention, and I don’t want to hijack your thread. But if you feel like explaining, I’m interested in the details. Are your concerns about SIAI in particular, or all existential risks charities? For example, the Future of Humanity Institute at Oxford University accepts donations, and uses those donations, in part, to fund further research into the causes of existential risk, and avenues for reducing existential risk. There are also groups aimed at reducing the risk of global nuclear war, and the risk of biotech disasters.
A second question is how to allocate your research efforts. As you note, the huge differences between different charities’ efficiency mean that research into where to donate can have huge value (relative to e.g. spending more hours earning money). Do you plan to do further research as time passes? If so, what endeavors are on your list of “types of philanthropy worth looking into”?
A major problem with many existential-threat charities is evaluating whether they actually reduce existential risk, or whether they will in fact increase it. The evidence of history, for example, indicates that even the best foreign policy experts are not very good at evaluating a policy’s secondary effects and perverse incentives. The result is that it is very hard to evaluate whether the net effect of spending money on what is supposed to be “reducing the risk of global thermonuclear war” will actually increase or decrease that risk. The very same multipliers that produce massive utility under the assumption of intended consequences produce massive disutility under the assumption of perverse consequences.
On the other hand, it’s rather easy to evaluate the net value of a heavily-studied vaccination against an endemic disease, and you can be reasonably certain you’re not actually spreading the very disease you’re trying to fight.
This sounds more like a bias (people want to know they’re successful; ambiguity aversion; want a solid warm glow) than a consideration which could reasonably compete against the economies of scale in existential risk. The penalty for increased uncertainty owing to difficulty of measurement is simply not as large as the orders of magnitude of scope involved.
The orders of magnitude of scope only matter if you know which way they fall. If a donation to the Nuclear Threat Initiative increases the risk of global nuclear war (say, by a reduction in arsenals deceiving a leader into believing he can make a successful first strike), the orders of magnitude of negative result make it a vastly worse choice than burning a hundred-dollar bill just to see the pretty colors.
If something has a 1% chance of working and a 0.8% chance of backfiring, the expected utility is the same as it would be with a 0.2% chance of working, assuming that the benefits and harms are equal and opposite. This only reduces the expected utility by a factor of 5. Existential risk is so important that it is better than many other things by much more than a factor of 5.
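The arithmetic is trivial but worth making explicit; a minimal sketch, with the probabilities taken from this comment and the equal-and-opposite assumption baked in.

```python
# Net expected utility of a risky intervention, assuming benefit and harm
# are equal in magnitude and opposite in sign.
p_work, p_backfire = 0.01, 0.008
p_net = p_work - p_backfire    # behaves like a pure 0.2% chance of working
reduction = p_work / p_net     # how much the backfire risk cost us

print(f"effective success probability: {p_net:.3%}")
print(f"expected utility reduced by a factor of {reduction:.1f}")
```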
Sure. Now show me the detailed analysis of how you got those very precise numbers: your specific proposed existential-risk intervention having a 1.00% chance of working and a 0.80% chance of backfiring, rather than the opposite numbers of 0.80% working and 1.00% backfiring.
Because, see, if the odds are the second way, then the expected utility of your intervention is massively, massively negative. Existential risk is so important that while reducing it is better than many other things by much more than a factor of five, increasing it is much, much worse than many evils by much more than a factor of five.
The real universe has no “good intentions” exception, but there’s a cognitive bias which causes people to overestimate the likelihood that an act taken with good intentions will produce the intended good result and underestimate the risks of negative results. When uncorrected for in matters of existential risk, the result could be, because of the mathematics of scope, an unintentional atrocity.
Now, my back-of-the-envelope calculation is that SIAI doesn’t actually increase the risk of an unfriendly AI by actively trying to create friendly AI. There are so many people doing AI anyway, and the default result is so likely to be unfriendly, that SIAI is a decent choice of existential risk charity. If it succeeds, we have upside; if it creates an unfriendly AI, we were screwed anyway.
On the other hand, the Nuclear Threat Initiative is not merely fucking around with what has seemingly been shown to be a fairly stable system in a quest to achieve an endpoint that is itself unlikely to be stable (total nuclear disarmament; the official goal of the NPT), with all sorts of very-hard-to-calculate scenarios which mean it might on net increase risks of nuclear annihilation of humanity. No, it also might be increasing existential threat of, say, runaway greenhouse warming, by secondary discouraging effects on (for example) nuclear power production. There is nobody on the planet who understands human society and economics and power production and everything else involved well enough to say with any confidence whatsoever that a donation to the Nuclear Threat Initiative will have positive expected utility. All we have to go on is the good intentions of the NTI people, which is no more a guarantee than assurances from a local newspaper horoscope that “Your efforts will be rewarded.”
It can never be impossible to determine expected utility because probability is a function of the information that you have. Probability is in the mind; it is part of the map, not the territory.
If you do an expected utility calculation with the little information you have, what you calculate will be the expected utility. If you calculate a higher expected utility for donating then, by the VNM utility theorem, donating is what you would prefer. It may still not be the right choice, because while donating might be preferable to not donating, learning more about the SIAI might have an even higher expected utility. However, once you have all the relevant information you can get, it is nonsensical to say that you can’t calculate the true probability; probability is a quantification of your own knowledge.
Even if you believe that someone else knows something that you don’t, you must make a best guess (well, a best probability distribution of guesses) and make a judgment under uncertainty. People have brought up the possibilities of information being intentionally withheld and of wishful thinking. These are no excuse; account for them to the best of your ability and choose. No matter how unfair the set of evidence you receive is, there is always an optimal probability distribution over what it really means. This is what is used in expected utility calculations.
Yes, that’s what you do. And my analysis is that the best decision under the available uncertainty is that the probability of a donation to NTI doing massive good is not distinguishable from the probability of it doing massive harm. The case for 1.0 vs. 0.8 is no more convincing to me than the case for 0.8 vs. 1.0. Given a hundred questions on the level of whether the Nuclear Threat Initiative is a good thing to do, I would not expect my answers to have any better chance of being right than if I answered based entirely on the flips of a fair coin. I would, as I said elsewhere in this discussion, take an even-money bet on either side of reality, in the fullness of time, proving the result to be either massive weal or massive woe. The massiveness on either side is meaningless because both sides cancel out. The expected utility of a donation to the NTI is, by my estimates, accordingly zero.
Furthermore, I am of the opinion that the question is, given the current state of human knowledge, such that no human expert could do better than a fair coin, any more than any Babylonian expert in astronomy could say whether Mars or Sirius was the larger, despite the massive actual difference in their size. Anyone opining on whether the NTI is a good or bad idea is, in my opinion, just as foolish as Ptolemy opining on whether the Indian Ocean was enclosed by land in the south. I don’t know, you don’t know, nobody on Earth knows enough to privilege any hypothesis about the value of NTI above any other.
When you don’t know enough to privilege any particular hypothesis over any other, the sheer scale of the possible results doesn’t magically create a reason to act.
Your conclusion follows from your premises.
I find some of the description of your state of knowledge doubtful.
50% is a very specific probability. It is reasonable here because it is the prior for the truth of a statement. If there were truly no major pieces of evidence, it could also be your posterior, and you may believe that. However, if there are any observations that constitute significant evidence, it is unlikely that they exactly balance out, though that is possible if there is sufficiently little evidence. Given the stakes, finding out exactly how the pieces of evidence balance would be both possible and extremely important in this case.
Yes, if there are any observations that do constitute significant evidence, they are unlikely to balance out. But when a question is of major potential importance, people tend to engage emotionally, which often causes them to take perfectly meaningless noise and interpret it as evidence with significance.
This general cognitive bias toward overestimating the significance of evidence on important issues is a key component of the mind-killing nature of politics. Having misinterpreted noise as evidence, people find it harder to believe that others can honestly evaluate the balance of evidence on such an important issue differently, and find the hypothesis that their opponents are evil more and more plausible, leading to fanaticism.
And, of course, the results of political fanaticism are often disastrous, which means the stakes are high, which means, of course, I may well be being pushed by my emotional reaction to the stakes to overestimate the significance of the evidence that people tend to overestimate the significance of evidence.
Even if there are many false claims of evidence, there could still be some real evidence. If you think that the chance that you could find evidence (the conjunction of evidence actually existing and it being findable) isn’t too low, then you could try to search for it. However, from what you said, it seems that this improbability lowers the expected utility enough that you find it preferable to contribute to other causes. Is that your reasoning? Also, do you think all this applies to the SIAI?
There is almost certainly real evidence at some level; human beings (and thus human society) are fundamentally deterministic physical systems. I don’t know any method to distinguish the evidence from the noise in the case of, for example, the Nuclear Threat Initiative . . . except handing the problem to a friendly superhuman intelligence. (Which probably will use some method other than the NTI’s to ending the existential threat of global thermonuclear war anyway, rendering such a search for evidence moot.)
It doesn’t apply to the SIAI, because I can’t think of an SIAI high-negative failure mode that isn’t more likely to happen in the absence of the SIAI. The SIAI might make a paperclip maximizer or a sadist . . . but I expect anybody trying to make AIs without taking the explicit care SIAI is using is at least as likely to do so by accident, and I think eventual development of AI is near-certain in the short term (the next thousand years, which against billions of years of existence is certainly the short term). Donations to SIAI accordingly come with an increase in existential threat avoidance (however small and hard-to-estimate the probability), but not an increase in existential threat creation (AI is coming anyway).
(So why haven’t I donated to SIAI? Akrasia. Which isn’t a good thing, but being able to identify it as such in the SIAI donation case at least increases my confidence that my anti-NTI argument isn’t just a rationalization of akrasia in that case.)
I was thinking more of human-comprehensible evidence when I said “evidence”, but you seem to have found that none of that exists.
I agree with your reasoning about the SIAI.
http://lesswrong.com/lw/3kl/optimizing_fuzzies_and_utilons_the_altruism_chip/ suggests a method for motivating oneself to donate. I haven’t tried this, but the poster found it quite effective.
We run into the Gambler’s Ruin pretty quickly when dealing with bets concerning existential risk reduction, so the assumption that the benefits and harms are equal and opposite seems questionable. Expected utility calculations need a lot of tweaks in cases like this.
I was not suggesting that this is the actual math; I was merely giving an example to show that the possibility of an existential risk reduction effort backfiring does not necessarily make it a bad idea to contribute.
Say I assign a 0.2% probability to a given intervention averting human extinction. If I assign it a 0.1% probability of bringing about extinction (which otherwise would not have occurred), then I’ve lost half the value of an intervention with a 0.2% probability of success and no risk of backfire. A 0.198% probability of extinction would leave a hundredth of the value.
Even at that point, it seems like quite a stretch to say that the best estimate of the Nuclear Threat Initiative’s existential risk impact is that it is 99% as likely to bring about existential catastrophe as to prevent it. And note that if the risk of backfire is to wipe out more orders of magnitude of x-risk reduction opportunity, the positives and negatives need to be very finely balanced:
1. If an x-risk reduction intervention has a substantially greater probability of averting than of producing existential risk, then it’s a win.
2. If an x-risk reduction intervention has a greater probability of producing than of averting existential risk, then preventing its use is itself an x-risk intervention with good expected value.
To be neutral, the probability of backfire must fall in a very narrow range.
Also, as Anna notes, uncertainty on the numbers at this scale leads to high value of information.
If you assigned a 0.2% probability to a social intervention producing a specific result, I’d mostly be highly skeptical that you have enough data, of good enough quality, to put that precise a number on it. Once probabilities get small enough, they’re too small for the human brain to estimate accurately.
To be neutral in reality, yes, the probability must be in a very narrow range. To be neutral within the ability of a human brain to evaluate without systematic quantitative study, it just needs to be small enough that you can’t really tell if you’re in case 1 or case 2.
Do you mean that people tend to be poorly calibrated? You might mean that events or statements to which people assign 0.2% probability happen more often than that. Or you might mean that they happen less often. But either way one should then shift one’s probability estimates to take that information into account.
Or do you mean that such a number would be unstable in response to new information, more thinking about it, getting info on how priming and biases affect estimation (obviously such estimates on the spot depend on noisy factors, and studies show gains just from averaging estimates one makes at different times and so forth), etc?
In either case, if you were compelled to offer betting odds on thousands of independent claims like that (without knowledge of which side of the bet you’d have to take, and otherwise structured to make giving your best estimate the winning strategy) how would you do it?
As an aside, Yvain’s post on probability estimation seems relevant here.
The second.
Specifically, I am of the opinion that it is well-demonstrated that calculating adverse consequences of social policy is both sufficiently complicated and sufficiently subject to priming and biases that it is beyond human capacity at this time to accurately estimate whether the well-intentioned efforts of the Nuclear Threat Initiative are more likely to reduce or increase the risk of global thermonuclear war.
If I were forced to take a bet on the issue, I would set the odds at perfectly even. Not because I expect that a full and complete analysis by, say, Omega would come up with an even probability, but because I have no ability to predict whether Omega would find that the Nuclear Threat Initiative reduces or increases the chance of global thermonuclear war.
Currently I don’t think existential risk charities are very appropriate for small-scale individual donations, because of the difficulty of evaluating them. I feel that donating to a long-term research charity is a recipe for either analysis-paralysis or a decision that’s ultimately arbitrary. I’ll definitely continue gathering information, and see whether I can raise my confidence in an existential risk charity enough to consider donating. I think it will take a lot of research.
For any systemic risk charity, you can give a kind of “Drake equation” that arrives at an estimated dollar-per-life based on a sequence of probability estimates. Off the top of my head, I think the global population estimate for 2050 is around 8 billion, assuming the current trend in reducing the number of people in extreme poverty continues (reducing extreme poverty reduces pop. growth). That means you have to arrive at a probability greater than 8,000,000:1 to get a cost-per-life estimate of under $1,000.
At first glance that odds ratio looks pretty generous. But it’s very difficult to have any kind of confidence in the calculation that leads to that. How do I decide between estimates of 10^-3 and 10^-5 likelihood? They’re both too small for me to evaluate informally, and there’s two orders of magnitude difference there. Is there a page where you lay out these estimates? I’ve kind of assumed that this existed, but haven’t seen it yet.
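The sensitivity here is easy to exhibit; a sketch with an illustrative $1,000 donation and the 8-billion population figure from the parent comment, reading the probability as the expected reduction in extinction risk the whole donation buys.

```python
donation = 1_000               # USD, illustrative
population = 8_000_000_000     # rough 2050 world population, per the parent comment

costs = {}
for p in (1e-3, 1e-5):         # two informally-indistinguishable risk-reduction estimates
    expected_lives = p * population
    costs[p] = donation / expected_lives

# A 100x disagreement in the probability estimate is a 100x disagreement
# in cost-effectiveness, with nothing in between to anchor on.
print(f"cost ratio between the two estimates: {costs[1e-5] / costs[1e-3]:.0f}x")
```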
The above calculation seems to be considering only current people, and not much valuing additional years of happy life for current people or better lives relative to current Western standards. Nick Bostrom’s Astronomical Waste paper discusses those issues. Time-discounting isn’t enough to wipe out the effect either, since populations may expand very quickly (e.g. brain emulations/artificial wombs and AI teachers).
Gaverick Matheny’s paper “Reducing the Risk of Human Extinction” is also relevant, although it arbitrarily caps various things (like the rate of population growth) to limit the dominance of the future.
If you care about bringing future people into being, then the expected future population if we avoid existential risk is many, many orders of magnitude greater than the current population of the world and looms very large.
If you don’t care about future people then you have to grapple with the Nonidentity Problem:
Separately, there seems to be a typo in this paragraph of your post:
If you mean “what reduction in the probability of (immediate) extinction is equivalent, in expected lives of currently-living people, to saving one life today”, then that will be near 1 in 8 billion, not 1 in 8 million. That figure is also a slight underestimate if you only care about current people, because medium-term catastrophes would partly kill future people who don’t yet exist, and many current people may have died by then.
Also, if you’re looking for easier-to-evaluate charities, or bigger, higher-status ones endorsed by folk such as Warren Buffett, foreign policy elites, etc., I suggest the Nuclear Threat Initiative as an existence proof of the possibility of spending on x-risk reduction. I wouldn’t recommend giving to it in particular, but it does point to the feasibility of meaningful action. Also see Martin Hellman’s work on reducing nuclear risk.
Are nukes really an x-risk?
Giving What We Can is another organization that tries hard to figure out what the most efficient charities are. Their list of recommendations is here.
There is a great deal to be said for giving to meta-charities like GiveWell or GWWC over giving directly to, e.g., VillageReach: by sending an explicit message that you wish to give efficiently, you give charities an incentive to compete for the efficiency dollar, and you encourage others to give efficiently and to promote efficient giving.
That said, I also believe that existential risk mitigation is a more efficient use of your donation.
Why does this post have so few upvotes? The author is committing money to make the world a better place, is being strategic about it, is putting forth a good argument for donating to Givewell directly in place of Givewell’s recommended charities, and is starting useful conversation. I wish I could upvote it more than once.
Discussion is something of a ghetto. None of its comments or posts are linked in the sidebar, which directs a lot of LW traffic. I only saw this because I specifically subscribe to the Discussion RSS feed.
If this were switched to the main article area with the concomitant traffic (to say nothing of the front page itself), I’d be very surprised if it wound up after a month with under 20 net upvotes.
It had 2 upvotes (one of them from me) when I typed the above, despite having by then been up for most of its current lifetime and having attracted many comments. Low traffic wasn’t the problem; I’d been wondering if it was because folks disagreed with the poster about where to donate, or what.
Nothing strange about it. My last Discussion post has a net of 5 upvotes (1/2 this) - and 57 comments (~3x more), very few of which point to any issue in the linked material which I had written.
Probably just because it turns out there have been other recent discussions of this, as TheOtherDave pointed out below. Maybe I should’ve looked more carefully through Discussion.
What industry are you in? It might be highly efficient to form a group of folks interested in optimal philanthropy, who work where you work; then you could: (1) do further research and background reading, with them; and perhaps more importantly (2) get other folks curious about high-efficiency charities, and committed to thinking it through and potentially donating.
I believe some others have had success with such study groups, although I don’t remember the details.
I’m in academia; specifically I’m a post-doc working on computational linguistics. So I mostly have contact with PhD students and other academics. PhD students are poor, but that can actually be a good thing here. It means you can convince them that they ought to donate later, while they spend four years or so learning to live on little money. Then they go off and work at Google, or some such.
I do plan to start talking about my tithing, slowly, as it comes up in conversation. If everybody who decided to give rationally influenced at least one other person to do so too, the idea would grow virally. But gently, gently; nobody’s convinced by brow-beating moralising.
In academia, especially, forming a group that thinks through the impacts of different charities (and that potentially helps you decide where to donate) might allow you to get others engaged in a manner that feels more like recognizing their status/brains, and asking for intellectual help they might like to give, than like moralizing.
That sounds like a good idea. I’ll keep it in mind, although I’m not exactly in academia.
There’s a group of us in Boston doing something similar (not formal study, but dinner and discussion on how best to give). If anyone reading this is in the area and wants to come to future such dinners, write me.
Another group people should check out is Giving What We Can, which combines tithing with cost-effectiveness analysis, focusing on health charities at the moment. GWWC was founded by Toby Ord, who used to comment here. Some charities appear highly in both GWWC’s and GW’s recommendations (the Stop TB Partnership and the Against Malaria Foundation), but the two do disagree on some. There are discussions between Will Crouch (of GWWC) and Holden (of GW) in the comments here and here.
Points of disagreement between the two seem to include the usefulness of DALYs (Disability-Adjusted Life Years) as a metric (see the first link) and the focus on openness; GW seems to basically assume charities are guilty of inefficiency until proven otherwise, whereas GWWC basically assumes they’re average for their field.
Unfortunately Mandatory but Probably Unnecessary Disclaimer: While obviously not an employee, nor a pledgee, I have some connection with GWWC. All comments here represent my own views, should not be construed as representing GWWC, etc. etc.
If you are going to exclude SIAI (and related organizations such as the Future of Humanity Institute), SENS is likely the most efficient in terms of person-years saved, especially if weighting by quality of life.
Generally, I think it is far more effective to contribute to global common goods than to help individual poor people.
Your “global common goods” are just going to help “individual rich people” once they’re developed, and they’re almost certainly going to cost more per-person than the interventions that top charities are working to deploy.
With global common goods that we already have we can raise the life expectancy and quality of life of a substantial percentage of the world’s population by 30 years. Instead, you want to work on technologies that hope to increase the life expectancy and quality of life of a different portion of the world’s population by 30 years, at far greater cost. Why do you think that is? Do you think it might be because that technology is applied to the subset of the population you happen to be part of?
I have a hard time believing that SENS will be short of funding. It has the ultimate pitch for rich donors: we can make *your* life longer. How could that be underfunded, especially as the technology gets closer to market?
I tend to think GiveWell beats SENS in terms of adding expected lifespan to the lives of current people, because of the incredibly low cost of reducing 3rd world infectious disease (and a culture of efficient charity will improve aging research too), but it’s not entirely obvious to me. Some points that make aging research look more plausible:
-China and India are aging, and if trends continue for another decade the desperately poor will be concentrated almost wholly in Africa, a smallish minority of world population, with the great majority of deaths owing to diseases of aging and wealth; surprisingly, in the Copenhagen Consensus’s recent reports heart disease treatment in developing countries has ranked highly for cost-effectiveness
-if you develop aging therapies that work, developed countries will pay for the production themselves; if you develop new treatments for the desperately poor, you also need charity to pay for production
-pushing effective therapies for aging forward by a day could avert 100,000+ deaths, which would be worth $100 million at $1,000/life
-SENS is supposed to be in part a way to kickstart/lobby for research with various feasibility proofs, mobilizing a much larger pool of funds, similar to GiveWell, and so could plausibly compound to push research forward by one or more days
-the Gates Foundation has shifted the ratio of research effort going into 3rd world diseases vs aging (the NIH’s National Institute on Aging is relatively small, anyway) to put them within an order of magnitude of each other
-humanitarian foreign aid budgets greatly exceed aging-specific research budgets
-there are real taboos and social barriers to openly calling for research aimed at aging, creating a plausible niche
-SENS might actually reduce the pace of aging research by attracting backlash (I doubt this; having an “extreme” flank for triangulation often helps legitimize less extreme arguments)
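The arithmetic behind the $100 million figure in the list above can be checked with a quick sketch; both input figures are the original back-of-envelope assumptions, not verified data:

```python
# Rough value of accelerating aging therapies by one day, using the
# assumptions stated above (not established figures):
deaths_per_day = 100_000  # assumed global daily deaths from diseases of aging
value_per_life = 1_000    # assumed conservative dollar value per life saved

value_of_one_day = deaths_per_day * value_per_life
print(f"${value_of_one_day:,}")  # $100,000,000
```

The conclusion scales linearly with both inputs, so even cutting each assumption substantially leaves a large figure.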
I expect the treatment would be affordable to the working class, once economies of scale are established.
True, though the costs will be offset by a dramatic decrease in the cost of health care related to coping with the diseases of aging. It’s not clear to me what the net cost would be.
A 30 year increase in life span would be a low-end consolation prize if SENS doesn’t completely succeed. The actual stakes are potentially unbounded life span.
That is a nice theory, but we can observe whether rich people are funding SENS, and they aren’t. Instead, Bill Gates and Warren Buffett are donating to typical “help the poor” charities. Possible explanations are that rich people, like people generally, don’t think that indefinite lifespan is possible or desirable. See also this explanation of similar issues in cryonics, some of which generalizes.
Maybe the situation will change as the technology gets closer to market. But it doesn’t make sense to make current decisions as if that hypothetical future is occurring now.
That’s a good point. Aging is very expensive, so I’d hope the interventions would eventually be at least cost-neutral.
The “potentially unbounded life span” part is so far off we can’t reliably estimate when it might be achieved. I’d guess that’s also why they’re having trouble getting funding for it.
The poverty we have today is a massive inefficiency that we can solve, and be better off for having done so. We can have a larger, better global economy, generating more surplus for this kind of research. Or we can continue to let a great portion of the world’s population suffer, and let money continue to be wasted on ineffective interventions.
Yes, as I mentioned in the last paragraph here, I take a more favorable view of “helping poor people” charities that actually achieve this. Validating this requires more than just counting lives saved, though.
Right. Well I think that’s our critical differing assumption.
My view of these charities would be different if I didn’t think the gains appreciated. I’d donate to a science or technology endeavour if I thought that the next generation would be exactly the same, and all a donation could do was provide an anaesthetic.
The general opinion seems to be that foreign aid has saved lives from disease, holding per capita income fixed, and gains in life expectancy and infant survival have probably increased total (but not per capita) GDP through larger populations, but there is very little evidence of a positive effect on GDP per capita, i.e. reducing poverty.
Analyses tend to find weak effects of aid, which disappear when replicated on new datasets, and effect sizes tend to shrink as sample size and data quality increase. The literature also shows the usual signs of data mining and publication bias like spikes around the significance threshold, disproportionate reporting of positive results, etc. See this article, for instance.
The Gates Foundation, GiveWell, and many others seem to buy the basic model that public goods (for Gates, agricultural and medical research and vaccines) work, and that public health can generate welfare and save lives, but that these are not great for economic development.
I don’t want to claim that we now understand how to do aid without making mistakes. But I do want to say that decades of bad aid have not accomplished as much as they might have if they were actually designed to help the poor (rather than to win the Cold War, support American farmers, or serve other political goals). I think it’s reasonable to expect that evidence-based aid will do better than aid as a whole has so far.
It seems that knowing if and to what extent a charity helps people become productive participants in the economy would be valuable to you. As near as I can tell, GiveWell does not rate charities on this criterion. Since it sounds like you are planning to donate a substantial amount, you should consider contacting them about making an earmarked donation for a research project to establish such ratings (I don’t know if they actually do this, but it is worth trying).
GiveWell has an economic empowerment category, but says that the efficacy and transparency of interventions and charities in that sector are too poor to recommend anything in the category in comparison with public health.
What are the causes of your belief?
Regarding efficiency of SENS, it has a huge potential payoff (indefinite lifespan, barring accidents, with good health) with reasonable probability of success, and large payoffs (extended lifespan and good health) for partial/in progress success. Aubrey de Grey impresses me as a goal-driven rationalist who could pull this off, with his ability to break the problem into pieces, find actionable approaches, and engage the wider scientific community to make progress.
Regarding global common goods versus helping individuals, I would be underestimating the benefit of common goods by multiplying the benefit to an individual by 7 billion, since that doesn’t account for humans born in the future.
I would view helping individuals more favorably if we could reliably help them sufficiently that they would be able to pay it forward, and actually did so, at a supercritical rate.
As the poster notes, GiveWell is an attempt to change global institutions around philanthropy, not just to improve life for individuals:
If GiveWell really does influence a substantial amount of philanthropy, then I would consider it as a public good charity with the multiplier that implies. Is there data on its influence and projected influence?
I recall a while back that Vassar was talking with GiveWell about rating SIAI. Has anything come of that?
Yes. They posted a bunch of self-evaluation stats. It is a start toward the information you seek.
Yup; this is a recurring theme.
Congratulations on your new job!
That would exclude the SIAI. Does the SIAI publish any kind of progress report?
Yes, at least as newsletters. There’s also the blog.
I’d also be happy to describe our budget and what we might do with increased donations to anyone who’s seriously interested. And folks are welcome to visit us and see what we’re doing. But, yes, there’s room for increased use of progress metrics, measurement of said progress, etc.
I’m surprised this is not already written up, as opposed to only being written on demand for someone ‘seriously interested’. Wouldn’t that be a standard part of a donation appeal: ‘we’re spending your money efficiently in these ways, and if we got more money, we could do these excellent things’?
How do you measure progress when finding out that you’ve made a mistake and need to dump a bunch of the work you’ve done is likely to be an important part of the task?
Good question. What do you think of how GiveWell does it? (They do assess their own performance, in accord with their overall emphasis on transparency and metrics, and they are also in the research business, so they, like us, often need to backtrack and re-assess.)
I like the piece from GiveWell, but they’re doing things which are much easier to measure.
My impression is that SIAI is at a stage where most of what can be measured is inputs (money raised, hours worked) rather than outputs, and it’s hard to tell whether an output (a new piece of theory, for example) is actually getting closer to one’s goals.
I’m not saying that SIAI’s work is unimportant, but evaluating it may be more a matter of logic than measurement.
The title of this post presumes that SIAI is the best charity, which is a dubious claim at best.