I think “exploit” is a bad way of looking at it, for the reasons that pengvado objects to. However, there’s also the possibility that you’re running an incorrect algorithm, or have otherwise made an error in reasoning when selecting the Top #1 charity.
Also, if numerous people run the same algorithm, you’re more likely to run into over-saturation issues with a “single charity” model (a thousand people all decide to donate $100 this month—suddenly Charity A has $100K, and can only efficiently use, say, $20K). I’d mostly see this coming up when a major influence (such as a news story) pushes a large number of people to donate suddenly, without any easy way to “cap” that influence (i.e. the news is unlikely to say “okay, Haiti disaster funding is good, stop now”).
It’s important to realize that if we have, say, a 50% chance of being wrong about each charity, and we’re donating $100, we’re still producing an expected $50 worth of charity regardless of how we split it. However, if we put all our eggs in one basket, we get either $100 or $0 worth of charity. With five different charities, we instead get a roughly bell-shaped spread over $100, $80, $60, $40, $20, and $0 as possible outcomes.
If charity is linear, it doesn’t matter. However, I’d suspect there are incentives favoring the bell curve—both because it minimizes the chance of the worst-case $0-benefit scenario, and simply out of an aesthetic/personal preference for less risky investments. (If nothing else, risk-averse individuals will probably donate more to a bell curve than to an “all or nothing” gambit.)
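To make the arithmetic above concrete, here’s a minimal sketch (the function name and the 50%/five-charity numbers are just the illustration from this thread, not anything canonical) that enumerates the outcome distribution when a donation is split equally across charities that each independently turn out to be effective with some probability:

```python
from itertools import product

def outcome_distribution(total=100.0, n_charities=5, p_effective=0.5):
    """Distribution over 'dollars of actual charity produced' when `total`
    is split equally across `n_charities`, each of which independently
    turns out effective with probability `p_effective`."""
    share = total / n_charities
    dist = {}
    # Enumerate every effective/ineffective combination of the charities.
    for outcomes in product([0, 1], repeat=n_charities):
        value = share * sum(outcomes)
        prob = 1.0
        for hit in outcomes:
            prob *= p_effective if hit else (1 - p_effective)
        dist[value] = dist.get(value, 0.0) + prob
    return dist

# All eggs in one basket: a 50/50 gamble between $0 and $100.
print(outcome_distribution(n_charities=1))
# Five-way split: a binomial spread over $0, $20, ..., $100,
# with the same $50 expected value but far less chance of $0.
print(outcome_distribution(n_charities=5))
```

The expected value is $50 in both cases; what splitting buys is that the all-$0 outcome drops from probability 1/2 to 1/32.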
Obviously the “all or nothing” gambit is a simplification for the most part (though a fraudulent charity really could be one!), but I think it illustrates why splitting donations really is beneficial even if “shut up and multiply” says the charities are approximately equal.
If ‘numerous’ people manage to actually select and overload the same charity, that charity probably has someone running a similar algorithm and will be smart enough to pass the money on to choice #2. (Funnily enough, charities can and do donate to other charities.)
“that charity probably has someone running a similar algorithm”
That does not follow, unless you’re assuming a community of perfect rationalists.
I’m assuming here a community of average people, where Reporter Sara happened to run a personal piece about her favorite charity, Honest Bob’s Second Hand Charity, which pulls in $50K/year. The story goes viral, and suddenly Honest Bob has a million dollars in donations, no clue how to best put it to use, and a genuine conviction that his charity is truly the best one out there.
Even if we assume a community of rational donors, that doesn’t mean the charity is itself rational. If the charity won’t rationally handle over-saturation (over-confidence in its own abilities, lack of knowledge about other charities, the overhead of redistributing, social repercussions, etc.), then the community has to handle it. The ideal would probably be a meta-organization: Honest Bob can only really handle $50K more, so everyone donates $100, $50K goes to Honest Bob, and the rest is split proportionally—refunded or invested in second-pick charities.
However, the meta-organization is just running the same splitting algorithm on a larger scale. You could just as easily have everyone donate $5 instead of $100, and Honest Bob now has his $50K without the overhead expenses of such a meta-organization.
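The spill-over idea—fill choice #1 up to its saturation point, then pass the remainder down the list—can be sketched as a simple greedy allocation. Everything here is hypothetical: the names, the capacity figures, and the assumption that we even know each charity’s saturation point (which, per the discussion above, is exactly what a non-rational charity can’t tell us):

```python
def allocate(total, ranked_charities):
    """Greedy spill-over: fill each charity up to its capacity in
    preference order, passing any remainder to the next pick.
    `ranked_charities` is a list of (name, capacity) pairs."""
    allocations = {}
    remaining = total
    for name, capacity in ranked_charities:
        amount = min(remaining, capacity)
        allocations[name] = amount
        remaining -= amount
        if remaining <= 0:
            break
    return allocations, remaining

# 10,000 viral donors at $100 each, but Honest Bob can only absorb $50K;
# the surplus cascades to the second and third picks.
picks = [("Honest Bob", 50_000), ("Choice #2", 300_000), ("Choice #3", 700_000)]
print(allocate(1_000_000, picks))
```

Note this is the simplest ranked-fill version, not the proportional split mentioned above; either way, the meta-organization is just doing donor-side splitting with extra overhead.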
So, unless you’re dealing with a Perfectly Rational charity that can both recognize and respond to its own over-saturation point, splitting is still a rational tactic.