Note that the risk of charitable assistance not helping can also matter to the giver when they would have been among those helped—for example, when considering asteroid deflection charities, or disease research charities.
As I understand it, the psychology of charitable giving means that this is quite often the case in practice—people tend to support charities whose work also happens to benefit them, their sick relatives, their pets, their gender—or whatever.
While true, this runs into the main caveat of the “only support one charity” advice: you have to be donating an amount small enough not to change the marginal value of a dollar. When that’s true, dollars and utils are linearly related, so only the expected value of your donation matters and hedging buys you nothing extra.
Also, for existential risk the negative-correlation hedge doesn’t matter. Hedging your bets only pays off when one bet succeeds and another fails, but with x-risks, if a single bet fails, all of the bets stop mattering. So you should figure out which x-risk gives you the strongest returns when you fight it, and devote all your resources to that until its marginal value drops below that of another x-risk.
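A minimal sketch of that allocation rule, assuming you can estimate the marginal risk reduction per dollar for each intervention; the two marginal-value functions below are made-up placeholders, not figures from this discussion:

```python
# Toy model: marginal risk reduction per dollar for two hypothetical x-risk
# interventions, each with diminishing returns as spending accumulates.
def marginal_value_a(spent):
    return 5.0 / (1.0 + spent / 1000.0)

def marginal_value_b(spent):
    return 4.0 / (1.0 + spent / 500.0)

def allocate(budget, step=10.0):
    """Greedy allocation: each successive chunk of money goes to whichever
    intervention currently offers the highest marginal risk reduction."""
    spent = {"A": 0.0, "B": 0.0}
    marginal = {"A": marginal_value_a, "B": marginal_value_b}
    remaining = budget
    while remaining > 0:
        chunk = min(step, remaining)
        best = max(spent, key=lambda k: marginal[k](spent[k]))
        spent[best] += chunk
        remaining -= chunk
    return spent

# Everything goes to A until its marginal value falls to B's, after which the
# two are funded so as to keep their marginal values roughly equal.
print(allocate(5000.0))
```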
Now, there is a strong argument for diversifying due to ignorance: if you think that A will reduce risk by 5+/-1 and B will reduce risk by 4+/-1.5, then you should give 71% of your money to A and 29% to B.
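The comment above doesn’t say which model produces the 71%/29% figure; the following is just a minimal sketch of one way an interior split can fall out of uncertainty, assuming you maximise expected risk reduction minus a quadratic variance penalty with a risk-aversion parameter lambda (my assumption, not anything stated in the thread):

```python
# Mean-variance sketch: maximise
#   w*mu_a + (1-w)*mu_b - (lam/2) * (w**2 * sigma_a**2 + (1-w)**2 * sigma_b**2)
# subject to the budget constraint that the two weights sum to one.
mu_a, sigma_a = 5.0, 1.0     # A reduces risk by 5 +/- 1
mu_b, sigma_b = 4.0, 1.5     # B reduces risk by 4 +/- 1.5

def share_to_a(lam):
    """Fraction of the budget given to A, from the first-order condition of the
    objective above, clipped to a valid allocation."""
    w_a = ((mu_a - mu_b) / lam + sigma_b**2) / (sigma_a**2 + sigma_b**2)
    return min(max(w_a, 0.0), 1.0)

for lam in (0.1, 1.0, 5.0, 20.0):
    print(f"lambda = {lam:>4}: {share_to_a(lam):.0%} to A")

# Low risk aversion puts everything into A; higher risk aversion pushes the
# split towards the variance-weighted limit of about 69%/31%.
```

With no risk aversion at all (utility linear in risk reduced), the optimum snaps back to giving everything to A, which is the point the earlier comments were making.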
To illustrate what I meant: if you are giving to charities that aim to cure a fatal disease that you happen to have, then you face a personal risk of ruin if your donations don’t help, broadly similar to the risk that investors diversify their portfolios to protect against when their individual investments don’t pay off.
Of course, that is not selfless altruism, but it is still giving money to charities with the aim of helping them meet their goals, rather than for signalling purposes.
This still isn’t enough to invalidate the argument against diversifying. I’m not fully convinced by it, but...
Suppose your money would be enough to increase charity A’s chance of finding a cure from 50% to 50.08%, or charity B’s chance from 50% to 50.06%, or, by splitting the money, to increase A’s to 50.04% and B’s to 50.03%. I’m pretty sure you’re better off giving it all to A, which increases the chance of at least one of them finding a cure from 75% to 75.04%.
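For concreteness, here is a quick check of that arithmetic, assuming the two charities’ chances of finding a cure are independent (which the 75% baseline implies):

```python
# Probability that at least one of two independent charities finds a cure.
def p_at_least_one(p_a, p_b):
    return 1 - (1 - p_a) * (1 - p_b)

print(p_at_least_one(0.50,   0.50))    # 0.75     (baseline)
print(p_at_least_one(0.5008, 0.50))    # 0.7504   (everything to A)
print(p_at_least_one(0.50,   0.5006))  # 0.7503   (everything to B)
print(p_at_least_one(0.5004, 0.5003))  # ~0.75035 (split between the two)
```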
Suppose both charities have diminishing returns, so that funding which pushes the chance beyond 55% is less effective. That’s irrelevant to the situation where we aren’t in that range.
Suppose charity A had a 30% chance of being either corrupt or taking a completely useless approach to the cure, in which case it wouldn’t find it no matter how much money was donated. So long as the 50%, 50.04%, and 50.08% figures have taken this into account, you don’t need to consider it further.
Suppose one of the charities had already hit diminishing returns: your donation would have been enough to increase its chance of success from 1% to 2% if it hadn’t already had enough money for a 50% chance. That’s irrelevant to the situation where we aren’t in that range.
I only chose 50% to make the maths a bit easier; so long as neither is near 100%, similar arguments apply, though you need to make sure you’re considering each charity’s effect on the total chance of a cure, not its chance of discovering it by itself.
If you measure the good to be produced as -log2(probability(no cure)), so that a 50% chance of a cure is one unit, 75% is two units, etc., then, assuming independence, you can just add the amounts contributed by each charity.
This actually has an increasing-returns effect, which may partially or entirely offset a diminishing-returns one. Regardless, if your donation is still small, only the derivatives matter.
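A quick sketch of that bookkeeping, using log base 2 so that a 50% chance counts as one unit, as above:

```python
import math

def units(p_cure):
    """Good produced, measured as -log2 of the probability that no cure is found."""
    return -math.log2(1.0 - p_cure)

print(units(0.50), units(0.75))   # 1.0 and 2.0 units

# Additivity under independence: P(no cure at all) is the product of the
# per-charity P(no cure), so the units from independent charities simply add.
p_a, p_b = 0.5008, 0.50
p_either = 1 - (1 - p_a) * (1 - p_b)
print(units(p_either), units(p_a) + units(p_b))   # both ~2.0023

# The increasing-returns effect: the same 0.08-percentage-point bump is worth
# more units the higher the starting probability already is.
print(units(0.5008) - units(0.50))   # ~0.0023 units
print(units(0.7508) - units(0.75))   # ~0.0046 units
```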
Diversifying can pay off, even in relatively simple models, when you have inaccurate information. If you think charity A is best, but it ultimately turns out that that is because they spend 99% of their budget on marketing and advertising, then a portfolio with A, B, and C in it would have been highly likely to produce better results than giving everything to charity A.
Maybe you should obtain better information. However, in practice, assessing charities is poorly funded, there are controversies over which ones best support which goals, and getting better information on such topics is itself another way of spending money.
The bigger the chance of your information being inaccurate, the more it pays to hedge. Inaccurate estimates seem rather likely in the case of “risky” charities, where the case for them involves multiplying a hypothetical small probability by a hypothetical large benefit, and where it is challenging to measure efficacy.
I’m hitting the ‘bozo button’ for Tim in this conversation. The math has been explained to him several times over.
If you mean this, my comment would be that that proposes accounting for uncertainty by appropriately penalising the utilities associated with the charities you are unsure about. However, charities, especially bad charities, might well be trying to manipulate people’s perceived confidence that they are sound, so those figures might be bad.
If perceived utility is negatively correlated (at the top end) with actual utility, as in your example, then your strategy is superior to putting it all in the perceived-best. However, if you expect this to be the case, then you should update your beliefs on perceived utility. If the figures might be bad, account for that in the figures!
If there is even a small positive correlation between perceived and actual utility, putting it all in the perceived-best one is optimal.
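A minimal Monte Carlo sketch of that claim, under made-up toy numbers: each charity’s true effectiveness is drawn at random, your estimate (“perceived utility”) is the truth plus independent noise, and we compare putting the whole budget into the perceived-best charity against an even split. None of the specific distributions below come from this thread; they are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_charities = 100_000, 3

# True effectiveness per dollar, and a noisy estimate of it ("perceived utility").
true_util = rng.normal(loc=1.0, scale=1.0, size=(n_trials, n_charities))
perceived = true_util + rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_charities))

# Strategy 1: whole budget to the charity that looks best.
best_guess = perceived.argmax(axis=1)
all_in = true_util[np.arange(n_trials), best_guess]

# Strategy 2: split the budget evenly across all three.
even_split = true_util.mean(axis=1)

print("all-in on perceived-best:", all_in.mean())
print("even split:              ", even_split.mean())
```

With the positive correlation assumed here, the all-in strategy wins on average; if instead the highest perceived scores were systematically the worst (the “99% of the budget on marketing” scenario above), the split would win, which is exactly the disagreement in the preceding comments.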