Global poverty is too large a problem for any one person to solve, but each of us can still transform the lives of thousands of people. While it is difficult to help directly, we must not forget our most important advantage: on a world scale, we are very rich. We can thus pay for efficient services in health and education which, though desperately desired, are out of the reach of those in poverty. If the typical US citizen gave 10% of their income to the right NGOs, then each year they could:
* Distribute 700 mosquito nets, preventing 1,900 cases of malaria and 6 deaths
* Cure 170 people of tuberculosis, preventing 8 deaths
* Save 1,100 years worth of healthy life
* Provide 1,100 additional years of school attendance
The median personal income in the US is $35,500 (US Census 2008). Ten percent of this is $3,550.
Mosquito nets can be distributed for $5 each, cases of malaria prevented for $1.80, deaths from malaria prevented for $600 (see note 49 in this GiveWell summary).
Tuberculosis can be cured for $20, and deaths from TB prevented for $150-$750 (see the GiveWell page on the Stop-TB Partnership).
Disability Adjusted Life Years can be averted for as little as $3 each (see our page on neglected tropical diseases).
Treating children for neglected tropical diseases produces an extra year of school attendance for each $3 (see the J-PAL study, but note that this doesn’t include the possible need for extra teachers if more class members turn up).
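As a sanity check, the headline bullets above can be recovered from the quoted unit costs with simple division. This is a rough sketch assuming the costs quoted above; the TB-death figure uses the midpoint of the $150–$750 range:

```python
# Sanity check of the headline figures against the quoted unit costs.
# Assumes the $3,550 annual budget (10% of the median US income above).
budget = 3550

nets = budget / 5               # mosquito nets at $5 each
malaria_cases = budget / 1.80   # malaria cases prevented at $1.80 each
malaria_deaths = budget / 600   # malaria deaths prevented at $600 each
tb_cures = budget / 20          # TB cures at $20 each
tb_deaths = budget / 450        # midpoint of the $150-$750 range
school_years = budget / 3       # extra school years at $3 each

print(round(nets))            # 710  (quoted: 700)
print(round(malaria_cases))   # 1972 (quoted: 1,900)
print(round(malaria_deaths))  # 6
print(round(tb_cures))        # 178  (quoted: 170)
print(round(tb_deaths))       # 8
print(round(school_years))    # 1183 (quoted: 1,100)
```

The quoted bullets are slightly below the raw division results, consistent with the essay rounding its claims down.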
Read through that list again and consider that each of us could achieve one of these great benefits every single year. Read it through and try to imagine the scale of those numbers: to see the individual names and faces in your mind. Pick out one of these individuals and try to imagine the huge effect this will have on his or her life. It is just staggering. In a single week we can perform something like a miracle: saving a life, or restoring sight to the blind. Over our lives, we can each perform thousands of these ‘miracles’, leaving behind a remarkable legacy. Moreover, we can do all of this without leaving our countries, without leaving our preferred jobs, and without even giving up any parts of our lives that are truly important to us.
We clearly have a duty to do at least this much. We can do something of extreme moral importance without sacrificing anything of comparative value. How could we look these people in the eye and justify our failure to give even such a small amount? Isn’t this the least we could do?
Many people flee from these facts and try hard to forget them, but we needn’t do so. Instead, we can embrace the facts and simply decide to give generously. This is what the members of Giving What We Can have done. We’ve each made a public pledge to give at least 10% of our incomes to where we believe it will do the most to fight poverty. Whatever our incomes, we will all have a tremendous effect on thousands of lives. We don’t seek any praise for this as it seems to us to be the least we could do. What we do want is for others to join us in this endeavour: to share advice on the most effective ways to help, and to give what we can.
~GivingWhatWeCan.org
(If chosen for the prize, I will donate half to GivingWhatWeCan.org and half to Deworming the World.)
EDIT: Per the request of someone who appears to be heavily involved with GWWC, if chosen for the prize, I will donate the entire prize to Deworming the World.
It’s worth mentioning that, contrary to our intuitions, there are strong reasons for donating to a single charity rather than to several. Unless you donate a lot of money, you’re not going to change the marginal good of donating to that charity. This means that whichever charity you think has the highest expected marginal good will remain so, and you should donate only to that charity until your expectation changes. You also should not be risk averse when it comes to charity: since the planet’s population is quite large, a 50% chance of saving 2 lives is just as good as a 100% chance of saving 1 life.
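The risk-neutrality claim is just linearity of expected value. A minimal sketch with the comment's own numbers (illustrative only):

```python
def expected_lives(outcomes):
    """Expected lives saved, given (probability, lives) pairs whose probabilities sum to 1."""
    return sum(p * lives for p, lives in outcomes)

# A 50% chance of saving 2 lives (and 50% chance of saving none)...
risky = expected_lives([(0.5, 2), (0.5, 0)])
# ...versus a certainty of saving 1 life.
safe = expected_lives([(1.0, 1)])

print(risky, safe)  # 1.0 1.0 -- equal in expectation
```

The "large population" point is that across many donors and many years, variance washes out, so expected lives saved is the relevant yardstick.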
Unless you donate a lot of money, you’re not going to change the marginal good of donating to that charity. This means that whichever charity you think has the highest expected marginal good will remain so.
Er, yeah, of course. That’s totally right.
You also should not be risk averse when it comes to charity, since the planet’s population is quite large.
I’m not sure that conclusion follows as strongly as you think it does. Given a choice between an exactly 10% chance of saving exactly 11 lives and a perfect guarantee of saving 1 life, I guess I’d go with the .10 * 11 = 1.1 expected lives. But given a variety of charities with uncertain effectiveness and uncertain error bars, it seems sensible to diversify my portfolio. I could be objectively wrong about whether Deworming the World is better than the best water-treatment charity, but it’s unlikely that any one charity or combination of charities is significantly and knowably better than a small basket of some of the best charities in different categories. In short, I think there’s a difference between diversifying and hedging—you don’t need to hedge with individual charities, but you should still diversify.
Er, I meant my comment about what you plan to do with the prize, not as a comment on your essay.
Well, it’s not my essay—it’s Giving What We Can’s. I just plagiarized it, on the theory that they would see it as good publicity and wouldn’t mind. I’m giving them half by way of guilt money, and in case they can think of something better to do with the money. If not, they can just pass their $50 on to Deworming the World.
It’s important to understand why you normally diversify. Diversification is a good idea because you have diminishing marginal utility of wealth. That reason just doesn’t apply to charity. Your contributions make such a small difference that even if “total utility” had diminishing returns to lives saved (debatable), over your range of influence it has effectively constant returns.
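That asymmetry can be sketched numerically. The log-utility curve for personal wealth and the square-root curve for "total good" below are illustrative assumptions, not measurements, as is the size of the donation pool:

```python
import math

# Personal wealth: with log utility, a marginal dollar at $100k is worth
# half what it is at $50k -- this is why you diversify your own savings.
mu_50k = 1 / 50_000     # d/dx ln(x) at x = 50,000
mu_100k = 1 / 100_000   # d/dx ln(x) at x = 100,000
print(mu_50k / mu_100k)  # 2.0

# Charity: even if "total good" were concave in total donations
# (hypothetical square-root curve), one donor's $3,550 moves along a
# curve fed by billions of dollars, so their local slope is nearly flat.
def total_good(donations):
    return math.sqrt(donations)  # hypothetical diminishing returns

pool = 5_000_000_000  # assumed size of the global donation pool
gain_first = total_good(pool + 3550) - total_good(pool)
gain_second = total_good(pool + 7100) - total_good(pool + 3550)
print(gain_second / gain_first)  # just under 1.0: effectively constant returns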
I agree that it’s emotionally attractive to diversify, but it’s simply incorrect to do so (though if your choices are of similar quality, it’s not the worst thing in the world).
I’m beginning to think that LW needs a series on finance. A lot of what seems intuitive to me doesn’t seem intuitive to others, and I think standard finance has several valuable insights like this.
So, I know it’s wise to purchase warm fuzzies and utilons separately, but it just so happens that I get a significant quantity of warm fuzzies from saving hundreds of lives. I’m weird like that.
Anyway, suppose (against all evidence) that utilities are ordinally intercomparable. Suppose further that the relevant chunk of my utility function is U(charity) = U(fuzzies) + U(altruism), where U(fuzzies) = ln(# of lives saved), and U(altruism) = (net utility of saved life to owner) * (my discount rate for the utility of strangers). Let’s say the typical life saved by charities is worth 30,000 utilons to its owner, and that my discount rate for strangers’ utility is 1/100,000.
So, if I save 200 lives, I get ln(200) + (30,000 × 200 / 100,000) ≈ 65 utilons for me. If I save 2,000 lives, I get ln(2000) + (30,000 × 2,000 / 100,000) ≈ 607 utilons for me. My original point was going to be that I do get diminishing marginal returns to charity, but apparently given my assumptions they diminish so slowly as to be practically constant, and so I will shut up and pick just one charity in so far as I can find the willpower to do so.
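Those figures check out under the commenter's stated assumptions (the 30,000-utilon life value and the 1/100,000 discount rate are theirs, not facts):

```python
import math

def my_utility(lives_saved, life_value=30_000, discount=1 / 100_000):
    """U(charity) = ln(lives) warm fuzzies + linearly discounted altruism."""
    fuzzies = math.log(lives_saved)
    altruism = life_value * lives_saved * discount
    return fuzzies + altruism

print(my_utility(200))   # ≈ 65.3
print(my_utility(2000))  # ≈ 607.6
# The linear altruism term swamps the logarithmic fuzzies term, so the
# marginal returns are effectively constant -- hence "pick one charity".
```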
Hooray for accidentally proving yourself wrong with back of the envelope calculations.
I’d love a discussion of finance accessible to a smart but finance-phobic lay audience (raises hand).
For example, I too have been under the impression that I diversify my investments, not because I’m concerned that my investible income will drive some specific company over an inflection point in its utility function, but because it makes me less vulnerable to a single point of failure. (Indeed, I’m still under that impression; the alternative seems absurd on the face of it.)
I think the point is that investments pay a return to you, so a single point of failure really hurts you where it counts.
Charities, on the other hand, pay their return to the world. The world is not horribly damaged if a single charity fails; the world is served by many charities. In effect, the world is already diversified.
If the charity you gave all your donations to fails, you may feel bad, but you will get over it. Not necessarily the case if the investment you sunk all your own money into fails.
Sure, I get that. The thing I was responding to was jsalvatier’s comment that “It’s important to understand why you normally diversify. The reason why diversification is a good idea is because you have diminishing marginal utility of wealth.” S/he wasn’t talking about charity there, I don’t think… though maybe I was confused about that.
“Diminishing marginal utility of wealth” means the same thing as “don’t want to be exposed to a single point of failure”.
Yes, I think we do need a series on econ/finance.
(blink)
Two investment strategies, S1 & S2. They have the same average expected ROI, but S1 involves investing all my money in a single highly speculative company with a wider expected variance… my investment might go up or down by an order of magnitude. So, S1 suffers from a single point of failure relative to S2.
You’re saying that I could just as readily express this by saying “S1 involves diminishing marginal utility of wealth relative to S2”… yes?
Huh. I conclude that I haven’t been understanding this conversation from the get-go. In my defense, I did describe myself as finance-phobic to begin with.
No—what’s under question is the scaling behavior of your own utility function wrt money; if you exhibit diminishing marginal utility of wealth, that means you want to avoid S1.
Isn’t anything worth doing worth troubleshooting for a single point of failure?
Only if the relevant utility function is sublinear. If the relevant utility function is linear or superlinear, you aren’t risk averse, so you don’t care about a single point of failure. You care about p(failure) × 0 + p(success) × U(success).
E.g. assuming altruism is all you care about, if you could pay $10,000 for a bet which would cure all the problems in Africa with probability 0.01%, and do nothing with probability 99.99%, then you should take that bet.
So, the project “global charity” doesn’t have a single point of failure even if all individuals choose exactly one charity each.
But I’m not sure that I’m permitted to take the global point of view—after all, I only control my own actions. From my personal vantage point, I care about charity and I care about preserving my own solvency. To secure each of these values, I should avoid allowing my plan for achieving either value to suffer from a single point of failure, no?
No.
Right, you should make sure your plan for personal solvency doesn’t have a single point of failure. As for global charity, do you really have a plan for that? My model had been that you are simply contributing to the support of some (possibly singleton) collection of plans, with the objective of maximizing expected good. If the true goal is something different—something like minimizing the chance that you have done no good at all and hence miss out on the warm fuzzies—then, by all means, spread your charity dollar around.
Has anyone worked out numbers (according to whatever axioms) for what the best donor behaviour is for charities to encourage? Will they do better if they encourage people to give all their money to one charity, or to split it amongst several?
I’m not sure how you’d design a model for “encouragement” and its effects. Care to elaborate on what sort of model you’re thinking about?
Simply what would work out best for the charities rather than the donors. Would it be in the best interests of a given charity to have donors interested in giving all their charity money to one charity, or to have donors split it between a bunch of charities?
Can’t say I’ve seen specific models. I imagine it depends on what the utility functions of charities are. If the charities are run by people just looking to keep their jobs/get status, I’d guess they want people to donate to them rather than to other charities on the margin, and that would lead to people donating to lots of charities. If charities just want to help make the world better, then that implies that they want people to donate to the single best charity. Since there are many charities in the world, that supports theory 1.
I don’t think it depends on the motivations of those in the charities, based on my views of the insides of a few—some of which had succumbed thoroughly to the Iron Law of Institutions (where the people working there didn’t believe any more), some of which were pretty solidly oriented to their stated purpose, and some of which were in between (some recoverable, some not).
I think we can reasonably just assume, without making this question meaninglessly hypothetical, that the charities want money and we don’t need to go more deeply into it for the question to be applicable to the real world—and I am indeed asking because I am interested in the real world applications of this question:
What is the best giving strategy for charities to encourage: for people to spread their donations, or for people to give their year’s charity spend to a single charity?
(For a start, I would guess this would vary with size and fame of charity. Large charities would be more confident of being the winner, small charities would be more pleased to get anything at all. Then there is the fact that not only are the donors not independent actors, the donors and charities aren’t either. Can a donor and a charity be said to be conspiring to achieve their aim? I think they can. This complicates things, though I’m not sure if it’s enough to make a difference.)
[And I would be flatly amazed if there wasn’t considerable study on this subject already, as there have been enough large charities for long enough who would be considerably interested in the topic that anything claiming to be a radical breakthrough in thinking on the subject from outside the charity field will need to be assessed in the context of existing work, rather than being regarded as a completely new idea in an unexplored field. As I recall, there was no mention of any past work in this area at all. Is there actually none, or did the authors just not look?]
No. Bayesianism. Integrate over the uncertainty distribution for the error bars.
Sorry, I’d like to, but I’m running on flawed hardware. The uncertainty distribution is itself plagued by error bars, and so on all the way down. As Dirty Harry would put it, “A man’s gotta know his limitations.” Or, if you prefer Sherlock Holmes, “Data, data, data! I cannot make bricks without clay.”
You can still integrate. I doubt that the meta-errors are really important differentiators between VillageReach and Oxfam.
I’d guess that your true rejection might be wanting to avoid the emotional pain of failure if you stake all $ on one particularly good-looking charity which then goes on to be exposed as a fraud.
Or possibly your true rejection is the emotional hit you’d take from worrying about whether you got it wrong.
There are many non-rational reasons people have for placing a certainty premium on charity.
I agree. I share Mass_Driver’s emotional desire to diversify even though I know it’s wrong.
I’d probably actually diversify, since it seems like a positive-sum game between the egoist in me and the altruist in me. The egoist wants actual, real status/reward, which tends only to be gained when you pick an actual winner. People don’t give you any praise for anything other than actual, real successes, I find. And if the expected marginal utilities of the top 5 causes are comparable (the same to within a factor of 2), the altruist isn’t actually conceding very much.
It sounds like meta-errors are not your true rejection. I’d guess that your true rejection might be wanting to avoid the emotional pain of failure if you stake all $ on one particularly good-looking charity which then goes on to be exposed as a fraud.
That’s a pretty good guess. Probably correct. I wonder, though, how many people manage to care about charity so directly as to value saving lives literally for the sake of saving lives, rather than for the emotional satisfaction associated with it. I think the odds of me suffering from plague, reincarnation, violent uprising, etc. that is partly caused by me donating to a slightly suboptimal basket of charities are basically negligible. What, then, is the moral or philosophical theory that says that I should privilege the act of donating my whole charity budget to one maximally efficient charity over the emotional satisfaction of donating to a basket of moderately efficient charities? I enjoy the latter more; I know because I have tried each method a few times in different years. Why should I personally do something that I enjoy less? I don’t mean to be triumphant about this; possibly there is a very good reason why I should do something that I enjoy less. I just don’t know what it is. And don’t say something blunt like “it’ll save more lives.” I know it will save more lives on average, and I’ve noticed that I don’t care. Should I work to change this about myself, and if so, why?
I see it as a “deal” between an egoist subagent and an altruist subagent.
The crucially important factor in this deal is just what the effectiveness ratio is between charity #1 and charities #2, #3, #4, #5, #6. If the marginal good done per $ is similar between all of them, then OK, go ahead and diversify.
All right, well, let’s consider the least convenient example. Suppose the estimated marginal good of charity #1 and charity #6 differs by a factor of 8 -- enough to horrify the altruist, but barely enough for the egoist (who primarily likes to think that he’s being useful on lots of the most important problems) to even notice.
What can I tell the moderator subagent that would make him want to side in favor of the altruist subagent?
Well instead of spreading the money between all 6 charities, why not reduce your donation by 50% but donate all of it to #1, and then give the remaining 50% to the egoist subagent to buy something nice with?
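A back-of-the-envelope version of that deal. The per-charity effectiveness numbers here are purely illustrative, chosen only to span the factor-of-8 gap from the parent comment:

```python
# Illustrative effectiveness per dollar for charities #1-#6 (assumed
# values spanning a factor-of-8 gap; only the ratios matter).
effectiveness = [1.0, 0.8, 0.6, 0.4, 0.25, 0.125]
budget = 1.0

# Spreading the whole budget evenly over all six charities:
spread = sum((budget / 6) * e for e in effectiveness)

# The proposed deal: donate only half the budget, all of it to
# charity #1, and let the egoist subagent spend the other half:
deal = (budget / 2) * effectiveness[0]

print(spread)  # ≈ 0.529 units of good
print(deal)    # 0.5 units of good
```

Under these assumptions the bribe costs the altruist only about 5% of the good done by spreading everything; with a steeper drop-off in effectiveness, the deal would come out strictly ahead.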
It’s good thinking, but this particular egoist primarily likes to think that he’s being useful on lots of apparently important problems. He can’t be bribed with ordinary status symbols like fancy watches. Is there a way to spend money to trick yourself into thinking you’re useful? None immediately springs to mind, but I guess there might be one or two.
Which actually isn’t all that irrational if we think of it as a decision theory problem with diminishing returns on money—making sure that at least some of your money is used well becomes more important than gambling that all of it is used well.
Of course, given the nature of what’s being done with the money, the returns diminish much, much more slowly than we’re used to; diversification shouldn’t be a concern until you’re Onassis-ish rich.
But in the end, there is an “all the way down,” that being the information you started from. So the problem is solvable by good ol’ statistics, usually with there being a single best option.
Simply what would work out best for the charities rather than the donors. Would it be in the best interests of a given charity to have donors interested in giving all their charity money to one charity, or to have donors split it between a bunch of charities?
Can’t say I’ve seen specific models. I imagine it depends on what the utility functions of charities are. If the charities are run by people just looking to keep their jobs/get status, I’d guess they want people to donate to them rather than other charities on the margin, and that would lead to people donating to lots of charities. If charities just want to help make the world better then that implies that they want people to donate to the single best charitiy. Since there are many charities in the world, that supports theory 1.
I don’t think it depends on the motivations of those in the charities, based on my views of the insides of a few—some of which had succumbed thoroughly to the Iron Law of Institutions (where the people working there didn’t believe any more), some of which were pretty solidly oriented to their stated purpose, and some of which were in between (some recoverable, some not).
I think we can reasonably just assume, without making this question meaninglessly hypothetical, that the charities want money and we don’t need to go more deeply into it for the question to be applicable to the real world—and I am indeed asking because I am interested in the real world applications of this question:
What is the best giving strategy for charities to encourage: for people to spread their donations, or for people to give their year’s charity spend to a single charity?
(For a start, I would guess this would vary with size and fame of charity. Large charities would be more confident of being the winner, small charities would be more pleased to get anything at all. Then there is the fact that not only are the donors not independent actors, the donors and charities aren’t either. Can a donor and a charity be said to be conspiring to achieve their aim? I think they can. This complicates things, though I’m not sure if it’s enough to make a difference.)
[And I would be flatly amazed if there isn’t already considerable study on this subject. There have been enough large charities, for long enough, with a strong interest in the topic, that anything claiming to be a radical breakthrough in thinking from outside the charity field needs to be assessed in the context of existing work, rather than being treated as a completely new idea in an unexplored field. As I recall, there was no mention of any past work in this area at all. Is there actually none, or did the authors just not look?]
No. Bayesianism. Integrate over the uncertainty distribution for the error bars.
Sorry, I’d like to, but I’m running on flawed hardware. The uncertainty distribution is itself plagued by error bars, and so on all the way down. As Dirty Harry would put it, “A man’s gotta know his limitations.” Or, if you prefer Sherlock Holmes, “Data, data, data! I cannot make bricks without clay.”
You can still integrate. I doubt that the meta-errors are really important differentiators between VillageReach and Oxfam.
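A minimal sketch of what “integrate over the uncertainty distribution” means in practice, via Monte Carlo. All numbers are invented for illustration, not actual estimates for VillageReach or Oxfam; the lognormal error-bar model is an assumption.

```python
import math
import random

random.seed(0)

# Toy numbers: median good done per $1,000, with multiplicative
# (lognormal) error bars. Invented for illustration only.
def sample_effectiveness(median, sigma):
    return median * math.exp(random.gauss(0.0, sigma))

N = 100_000
ev_a = sum(sample_effectiveness(2.0, 1.0) for _ in range(N)) / N  # big error bars
ev_b = sum(sample_effectiveness(1.0, 0.2) for _ in range(N)) / N  # well measured

# With linear (risk-neutral) utility, only the posterior means matter:
best = "A" if ev_a > ev_b else "B"
print(f"E[A] = {ev_a:.2f}, E[B] = {ev_b:.2f}; give everything to {best}")
```

In this toy model the wide error bars don’t argue for splitting the donation at all; once you integrate them out, you’re left with two expected values and a single winner. Meta-errors would matter only if they were large enough to flip the ordering of the means.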
It sounds like meta-errors are not your true rejection.
I’d guess that your true rejection might be wanting to avoid the emotional pain of failure if you stake all $ on one particularly good-looking charity which then goes on to be exposed as a fraud.
Or possibly your true rejection is the emotional hit you’d take from worrying about whether you got it wrong.
There are many non-rational reasons people have for placing a certainty premium on charity.
I agree. I share Mass_Driver’s emotional desire to diversify even though I know it’s wrong.
I’d probably actually diversify, since it seems like a positive-sum game between the egoist in me and the altruist in me. The egoist wants actual, real status/reward, which tends to be gained only when you pick an actual winner; people don’t give you any praise for anything other than real successes, I find. And if the expected marginal utilities of the top 5 causes are comparable (the same to within a factor of 2), the altruist isn’t actually conceding very much.
That’s a pretty good guess. Probably correct. I wonder, though, how many people manage to care about charity so directly as to value saving lives literally for the sake of saving lives, rather than for the emotional satisfaction associated with it. I think the odds of me suffering from plague, reincarnation, violent uprising, etc. that is partly caused by me donating to a slightly suboptimal basket of charities are basically negligible. What, then, is the moral or philosophical theory that says that I should privilege the act of donating my whole charity budget to one maximally efficient charity over the emotional satisfaction of donating to a basket of moderately efficient charities? I enjoy the latter more; I know because I have tried each method a few times in different years. Why should I personally do something that I enjoy less? I don’t mean to be triumphant about this; possibly there is a very good reason why I should do something that I enjoy less. I just don’t know what it is. And don’t say something blunt like “it’ll save more lives.” I know it will save more lives on average, and I’ve noticed that I don’t care. Should I work to change this about myself, and if so, why?
I see it as a “deal” between an egoist subagent and an altruist subagent.
The crucially important factor in this deal is just what the effectiveness ratio is between charity #1 and charities #2, #3, #4, #5, #6. If the marginal good done per $ is similar between all of them, then OK, go ahead and diversify.
All right, well, let’s consider the least convenient example. Suppose the estimated marginal good differs between charity #1 and charity #6 by a factor of 8 -- enough to horrify the altruist, but barely enough for the egoist (who primarily likes to think that he’s being useful on lots of the most important problems) to even notice.
What can I tell the moderator subagent that would make him want to side in favor of the altruist subagent?
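One thing the moderator subagent could look at is the raw arithmetic of that least convenient case. A toy calculation, using the $3,550 budget from the head post and assuming an even six-way split (the factor-of-8 effectiveness gap is the hypothetical from above):

```python
budget = 3_550                    # 10% of median US income, from the post above
per_dollar = [8, 1, 1, 1, 1, 1]   # relative marginal good, charity #1 vs #2-#6

concentrated = budget * per_dollar[0]                  # everything to #1
diversified = sum(budget / 6 * e for e in per_dollar)  # even six-way split

ratio = concentrated / diversified
print(f"Concentrating does {ratio:.2f}x as much good")  # 48/13, about 3.69x
```

So the altruist gives up nearly three-quarters of the good done, which puts a concrete price on the egoist’s feeling of being useful everywhere at once.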
Well, instead of spreading the money between all six charities, why not reduce your donation by 50% but donate all of it to #1, and then give the remaining 50% to the egoist subagent to buy something nice with?
It’s good thinking, but this particular egoist primarily likes to think that he’s being useful on lots of apparently important problems. He can’t be bribed with ordinary status symbols like fancy watches. Is there a way to spend money to trick yourself into thinking you’re useful? None immediately springs to mind, but I guess there might be one or two.
You could spend 90% of the money on charity #1 and split the remaining 10% between the rest.
Thank you.
Which actually isn’t all that irrational if we think of it as a decision theory problem with diminishing returns on money—making sure that at least some of your money is used well becomes more important than gambling that all of it is used well.
Of course, given the nature of what’s being done with the money, the returns diminish much, much more slowly than we’re used to; diversification shouldn’t be a concern until you’re Onassis-ish rich.
As clearly stated above, for small donors marginal returns don’t diminish.
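A sketch of that point, with made-up numbers. The toy returns curve is linear until a charity’s room for more funding runs out; the $600-per-life figure echoes the malaria number in the head post, while the $10M funding gap is invented for illustration.

```python
def lives_saved(dollars, cost_per_life=600, room_for_funding=10_000_000):
    # Toy model: constant returns until the funding gap is filled, then
    # nothing. Real returns curves are smoother, but the point stands.
    return min(dollars, room_for_funding) / cost_per_life

budget = 3_550  # 10% of median US income, from the post above

# Concentrated vs. split across two identical charities:
concentrated = lives_saved(budget)
split = lives_saved(budget / 2) + lives_saved(budget / 2)
assert concentrated == split  # linear region: splitting gains nothing

# Splitting starts to matter only once a donation could fill a gap:
huge = 15_000_000
assert lives_saved(huge) < lives_saved(huge / 2) * 2
```

A $3,550 donation sits so far below any plausible funding gap that the donor never leaves the linear region, which is exactly the “marginal returns don’t diminish for small donors” claim.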
But in the end, there is an “all the way down,” that being the information you started from. So the problem is solvable by good ol’ statistics, and usually there is a single best option.
Giving What We Can does not accept donations. Just give it all to Deworm the World.
Okiedoke.