It’s important to understand why you normally diversify. Diversification is a good idea because you have diminishing marginal utility of wealth. That reason just doesn’t apply to charity: your contributions make such a small difference that even if “total utility” had diminishing returns to lives saved (debatable), over your range of influence it has effectively constant returns.
I agree that it’s emotionally attractive to diversify, but it’s simply incorrect to do so (though if your choices are of similar quality, it’s not the worst thing in the world).
I’m beginning to think that LW needs a series on finance. A lot of what seems intuitive to me doesn’t seem intuitive to others, and I think standard finance has several valuable insights like this one.
So, I know it’s wise to purchase warm fuzzies and utilons separately, but it just so happens that I get a significant quantity of warm fuzzies from saving hundreds of lives. I’m weird like that.
Anyway, suppose (against all evidence) that utilities are ordinally intercomparable. Suppose further that the relevant chunk of my utility function is U(charity) = U(fuzzies) + U(altruism), where U(fuzzies) = ln(# of lives saved), and U(altruism) = (net utility of saved life to owner) × (my discount rate for the utility of strangers). Let’s say the typical life saved by charities is worth 30,000 utilons to its owner, and that my discount rate for strangers’ utility is 1/100,000.
So, if I save 200 lives, I get ln(200) + (30,000 × 200 / 100,000) ≈ 65 utilons for me. If I save 2,000 lives, I get ln(2000) + (30,000 × 2,000 / 100,000) ≈ 607 utilons for me. My original point was going to be that I do get diminishing marginal returns to charity, but apparently given my assumptions they diminish so slowly as to be practically constant, and so I will shut up and pick just one charity insofar as I can find the willpower to do so.
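The arithmetic above checks out; here is a minimal Python sketch of the same back-of-the-envelope calculation (all figures are the illustrative assumptions from the comment, not real data):

```python
import math

def utilons(lives_saved, value_per_life=30_000, stranger_discount=1 / 100_000):
    """U(charity) = U(fuzzies) + U(altruism), per the toy model above."""
    fuzzies = math.log(lives_saved)                              # ln(# of lives saved)
    altruism = value_per_life * lives_saved * stranger_discount  # linear in lives
    return fuzzies + altruism

print(int(utilons(200)))    # 65  -- almost all from the linear altruism term
print(int(utilons(2_000)))  # 607 -- the logarithmic fuzzies term barely matters
```

As the comment observes, the linear term dominates the logarithmic one over any realistic range of donations, which is why the returns look effectively constant.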
Hooray for accidentally proving yourself wrong with back-of-the-envelope calculations.
I’d love a discussion of finance accessible to a smart but finance-phobic lay audience (raises hand).
For example, I too have been under the impression that I diversify my investments, not because I’m concerned that my investible income will drive some specific company over an inflection point in its utility function, but because it makes me less vulnerable to a single point of failure. (Indeed, I’m still under that impression; the alternative seems absurd on the face of it.)
I think the point is that investments pay a return to you, so a single point of failure really hurts you where it counts.
Charities, on the other hand, pay their return to the world. The world is not horribly damaged if a single charity fails; the world is served by many charities. In effect, the world is already diversified.
If the charity you gave all your donations to fails, you may feel bad, but you will get over it. Not necessarily the case if the investment you sunk all your own money into fails.
Sure, I get that. The thing I was responding to was jsalvatier’s comment that “It’s important to understand why you normally diversify. The reason why diversification is a good idea is because you have diminishing marginal utility of wealth.” S/he wasn’t talking about charity there, I don’t think… though maybe I was confused about that.
“Diminishing marginal utility of wealth” means the same thing as “don’t want to be exposed to a single point of failure”.
Yes, I think we do need a series on econ/finance.
(blink)
Two investment strategies, S1 & S2. They have the same average expected ROI, but S1 involves investing all my money in a single highly speculative company with a wider expected variance… my investment might go up or down by an order of magnitude. So, S1 suffers from a single point of failure relative to S2.
You’re saying that I could just as readily express this by saying “S1 involves diminishing marginal utility of wealth relative to S2”… yes?
Huh. I conclude that I haven’t been understanding this conversation from the get-go. In my defense, I did describe myself as finance-phobic to begin with.
No—what’s under question is the scaling behavior of your own utility function wrt money; if you exhibit diminishing marginal utility of wealth, that means you want to avoid S1.
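A toy numerical illustration of that point (the gambles and the log utility function are invented for the example, not taken from the thread): two strategies with the same expected wealth, where an agent with diminishing marginal utility strictly prefers the low-variance one.

```python
import math

# S2: a sure $100.  S1: same expected wealth ($100), but high variance.
safe = [(1.0, 100)]
risky = [(0.5, 10), (0.5, 190)]

def expected_utility(lottery, u=math.log):
    """Log utility is concave, i.e. exhibits diminishing marginal utility of wealth."""
    return sum(p * u(wealth) for p, wealth in lottery)

print(expected_utility(safe))   # ln(100), about 4.61
print(expected_utility(risky))  # about 3.77 -- lower, so this agent avoids S1
```

Both lotteries have the same mean, so a risk-neutral (linear-utility) agent would be indifferent; the concave utility function is exactly what makes the wide-variance strategy worse.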
Isn’t anything worth doing worth troubleshooting for a single point of failure?
Only if the relevant utility function is sublinear. If the relevant utility function is linear or superlinear, you aren’t risk averse, so you don’t care about a single point of failure. You care about p(failure) × 0 + p(success) × U(success).
E.g. assuming altruism is all you care about, if you could pay $10,000 for a bet which would cure all the problems in Africa with probability 0.01%, and do nothing with probability 99.99%, then you should take that bet.
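Spelling out that expected-value formula with made-up numbers (the 10⁹-utilon payoff is purely a placeholder): under linear utility, only p(success) × U(success) matters, not the 99.99% chance of nothing.

```python
# Hypothetical payoff: suppose the cured-everything outcome is worth 1e9 utilons.
p_success = 0.0001          # 0.01% chance
u_success = 1_000_000_000

expected = (1 - p_success) * 0 + p_success * u_success
print(expected)  # 100000.0 utilons in expectation

# A risk-neutral altruist takes the bet whenever $10,000 spent conventionally
# would buy fewer than that many expected utilons.
```

The near-certain failure branch contributes nothing to the sum; that is precisely what "not risk averse" means here.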
?
So, the project “global charity” doesn’t have a single point of failure even if all individuals choose exactly one charity each.
But I’m not sure that I’m permitted to take the global point of view—after all, I only control my own actions. From my personal vantage point, I care about charity and I care about preserving my own solvency. To secure each of these values, I should avoid allowing my plan for achieving either value to suffer from a single point of failure, no?
No.
Right, you should make sure your plan for personal solvency doesn’t have a single point of failure. As for global charity, do you really have a plan for that? My model had been that you are simply contributing to the support of some (possibly singleton) collection of plans, with the objective of maximizing expected good. If the true goal is something different—something like minimizing the chance that you have done no good at all and hence miss out on the warm fuzzies—then, by all means, spread your charity dollar around.
Has anyone worked out numbers (according to whatever axioms) for what the best donor behaviour is for charities to encourage? Will they do better if they encourage people to give all their money to one charity, or to split it amongst several?
I’m not sure how you’d design a model for “encouragement” and its effects. Care to elaborate on what sort of model you’re thinking about?
Simply what would work out best for the charities rather than the donors. Would it be in the best interests of a given charity to have donors interested in giving all their charity money to one charity, or to have donors split it between a bunch of charities?
Can’t say I’ve seen specific models. I imagine it depends on what the utility functions of charities are. If the charities are run by people just looking to keep their jobs/get status, I’d guess they want people to donate to them rather than other charities on the margin, and that would lead to people donating to lots of charities. If charities just want to help make the world better, then that implies that they want people to donate to the single best charity. Since there are many charities in the world, that supports the first theory.
I don’t think it depends on the motivations of those in the charities, based on my views of the insides of a few—some of which had succumbed thoroughly to the Iron Law of Institutions (where the people working there didn’t believe any more), some of which were pretty solidly oriented to their stated purpose, and some of which were in between (some recoverable, some not).
I think we can reasonably assume that the charities want money, without making this question meaninglessly hypothetical, and we don’t need to go more deeply into it for the question to be applicable to the real world. I am indeed asking because I am interested in the real-world applications of this question:
What is the best giving strategy for charities to encourage: for people to spread their donations, or for people to give their year’s charity spend to a single charity?
(For a start, I would guess this would vary with size and fame of charity. Large charities would be more confident of being the winner, small charities would be more pleased to get anything at all. Then there is the fact that not only are the donors not independent actors, the donors and charities aren’t either. Can a donor and a charity be said to be conspiring to achieve their aim? I think they can. This complicates things, though I’m not sure if it’s enough to make a difference.)
[And I would be flatly amazed if there wasn’t already considerable study on this subject. There have been enough large charities, for long enough, with a strong interest in the topic, that anything claiming to be a radical breakthrough from outside the charity field needs to be assessed in the context of existing work rather than regarded as a completely new idea in an unexplored field. As I recall, there was no mention of any past work in this area at all. Is there actually none, or did the authors just not look?]