Your first paragraph assumes that giving $5 to the top charity is of no more value than giving $1 to that charity.
If you don’t believe me, come up with a formal model that doesn’t assume that and see what it says. Just do the math.
Okay, here’s the model: the expected utility of $1 given to any of the chosen top 5 charities is nearly equal (due to inaccuracy in evaluating the utilities), and the charities are nearly linear (not super-linear). The expected utility of donating $x to charity i is x*a[i], and the a[i] values for the top 5 are very close to equal. [They are very close to equal because of your inability to evaluate the utilities of donations to charities precisely.]
(for reasonable values of x; we already determined that a multi-billionaire needs to diversify)
Thus the combined utility of donating $100 to each of the top 5 charities is nearly equal to the utility of donating $500 to the top one. There is a slight loss, because the expected utility of the #1 charity is very slightly above that of the #5.
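A minimal sketch of this model, with hypothetical a[i] values chosen only to be very close to equal:

    # Near-linear model from above; the a[i] are hypothetical placeholders.
    a = [1.00, 0.99, 0.99, 0.98, 0.98]   # expected utility per dollar, top 5 charities

    concentrated = 500 * a[0]            # $500 to the #1 charity -> 500.0
    split = sum(100 * ai for ai in a)    # $100 to each of the top 5 -> 494.0
    loss = concentrated - split          # the "slight loss": 6.0, about 1% of the total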
At the same time, the strategic reasoning is as follows: the function I (and people like me) use for selecting the top charity (or even the top 5) may be exploitable. When the donation is split between the top 5, each charity has 1/5 the incentive to exploit it, so the decision to split between the top 5, while unable to affect anything about the contribution right now, affects the future payoff of exploitative strategies (and, if known beforehand, affects the past payoff estimates as well).
Of course, the above reasoning does not work at all if you are grossly overconfident in your evaluations of charities and assume some giant differences between the expected utilities of the top 5, differences which you furthermore had detected correctly.
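To make the incentive claim concrete (the $500 pool is a hypothetical figure):

    # Splitting doesn't change what is donated today, but it divides what any
    # single charity stands to gain by gaming the ranking.
    pool = 500.0
    incentive_winner_take_all = pool     # gaming your way to #1 captures $500
    incentive_split_top5 = pool / 5      # gaming your way into the top 5 captures $100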
I think “exploit” is a bad way of looking at it, for the reasons that pengvado objects to. However, there’s also the possibility that you’re running an incorrect algorithm, or have otherwise made some error in reasoning when selecting the #1 charity.
Also, if numerous people run the same algorithm, you’re more likely to run into over-saturation issues with a “single charity” model (a thousand people all decide to donate $100 this month, and suddenly Charity A has $100K but can only efficiently use, say, $20K). I’d mostly see this coming up when a major influence (such as a news story) pushes a large number of people to suddenly donate, without being able to easily “cap” that influence (i.e. the news is unlikely to say “okay, Haiti disaster funding is good, stop now”).
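A sketch of that over-saturation worry, using the hypothetical figures above and the pessimistic assumption that dollars beyond a charity’s capacity produce no value:

    def realized(donation, capacity):
        # dollars beyond what a charity can efficiently use produce no value
        return min(donation, capacity)

    donors, gift = 1000, 100.0     # a thousand people donate $100 each
    capacity = 20_000.0            # Charity A can only efficiently use $20K
    concentrated = realized(donors * gift, capacity)    # 20,000.0
    split = 5 * realized(donors * gift / 5, capacity)   # 100,000.0, assuming each of
                                                        # the top 5 has similar spare capacity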
It’s important to realize that if we have, say, a 50% chance of being wrong about each charity, and we’re donating $100, we’re still producing an expected $50 worth of charity regardless of how we split it. However, if we put all our eggs in one basket, we get either $100 or $0 worth of charity. With five different charities, we get a bell curve over the possibilities $100, $80, $60, $40, $20, and $0.
If charity is linear, it doesn’t matter. However, I’d suspect that there are incentives favoring the bell curve: it minimizes the worst-case $0-benefit scenario, and it simply suits an aesthetic/personal preference for less risky investments. (If nothing else, risk-averse individuals will probably donate more to a bell curve than to an “all or nothing” gambit.)
Obviously I’m simplifying with the idea of an “all or nothing” gambit for the most part (though a fraudulent charity really could be one!), but I think it illustrates why splitting donations really is beneficial even if “shut up and multiply” says the options are approximately equal.
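A quick check of the bell-curve claim, assuming (as in the example) that each charity independently turns out to be a good pick with probability 0.5:

    from math import comb

    n, q, total = 5, 0.5, 100.0   # 5 charities, 50% chance each works out, $100 donated
    dist = {k * total / n: comb(n, k) * q**k * (1 - q)**(n - k) for k in range(n + 1)}
    # {0.0: 0.03125, 20.0: 0.15625, 40.0: 0.3125, 60.0: 0.3125, 80.0: 0.15625, 100.0: 0.03125}
    expected = sum(v * p for v, p in dist.items())   # 50.0, same as the single-charity bet
    # the single-charity gamble is {0.0: 0.5, 100.0: 0.5}: same mean, far wider spread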
If ‘numerous’ people manage to actually select and overload the same charity, that charity probably has someone running a similar algorithm and will be smart enough to pass the money on to choice #2. (Funnily enough, charities can and do donate to other charities.)
“that charity probably has someone running a similar algorithm”
That does not follow, unless you’re assuming a community of perfect rationalists.
I’m assuming here a community of average people, where Reporter Sara happened to run a personal piece about her favorite charity, Honest Bob’s Second Hand Charity, which pulls in $50K/year. The story goes viral, and suddenly Honest Bob has a million dollars in donations, no clue how to best put it to use, and a genuine conviction that his charity is truly the best one out there.
Even if we assume a community of rational donors, that doesn’t mean the charity is itself rational. If the charity won’t rationally handle over-saturation (over-confidence in its own abilities, lack of knowledge about other charities, overhead of distributing, social repercussions, etc.), then the community has to handle it. The ideal would probably be a meta-organization: Honest Bob can only really handle $50K more, so everyone donates $100, $50K goes to Honest Bob, and then the rest is split proportionally and refunded or invested into second-pick charities.
However, the meta-organization is just running the same splitting algorithm on a larger scale. You could just as easily have everyone donate $5 instead of $100, and Honest Bob now has his $50K without the overhead expenses of such a meta-organization.
So, unless you’re dealing with a Perfectly Rational charity that can both recognize and respond to its own over-saturation point, splitting is still a rational tactic.
If there are many charities competing to exploit the same ranking heuristic, then your proposal replaces an incentive of (probability p of stealing all of the donations) with (probability 5*p of stealing 1/5 of the donations). That doesn’t look like an improvement to me.
http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/63gy (the second half addresses specifically why “5*p of 1/5” might be preferred to “p of the whole”). In short, “5*p of 1/5” produces a bell curve instead of an “all or nothing” gambit.
The effort put towards exploiting a ranking heuristic is not restricted to the set {0, [whatever value is most convenient for your rationalization]}. The effort-to-payoff curve is flattened out at the high-effort end, where higher levels of effort don’t get you anything better than a place in the top 5.
It is clear you are rationalizing: 5*p > 1 when p > 0.2 (which p can be, if one expends sufficiently greater effort towards raising it than anyone else), and thus 5*p can’t possibly be a probability.
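A sketch of why the “5*p of 1/5” bookkeeping breaks down at high effort, using hypothetical probabilities (the chance of merely reaching the top 5 is generously modeled as 5*p, but it can never exceed 1):

    def payoff(p, pool=500.0):
        # p: an exploiter's chance of gaming its way to #1, which grows with effort
        winner_take_all = p * pool                # expected haul if donations concentrate
        split_top5 = min(5 * p, 1.0) * pool / 5   # expected haul if donations are split
        return winner_take_all, split_top5

    payoff(0.1)   # (50.0, 50.0): below p = 0.2 the two schemes pay the exploiter the same
    payoff(0.5)   # (250.0, 100.0): past the cap, extra effort buys the exploiter nothing more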