You can still integrate. I doubt that the meta-errors are really important differentiators between VillageReach and Oxfam.
It sounds like meta-errors are not your true rejection.
I’d guess that your true rejection might be wanting to avoid the emotional pain of failure if you stake all $ on one particularly good-looking charity which then goes on to be exposed as a fraud.
Or possibly your true rejection is the emotional hit you’d take from worrying about whether you got it wrong.
There are many non-rational reasons people have for placing a certainty premium on charity.
I agree. I share Mass_Driver’s emotional desire to diversify even though I know it’s wrong.
I’d probably actually diversify, since it seems like a positive-sum game between the egoist in me and the altruist in me. The egoist mostly wants actual, real status/reward, which tends to be gained only when you pick an actual winner; people don’t give you praise for anything other than actual, real successes, I find. And if the expected marginal utilities of the top 5 causes are comparable (the same to within a factor of 2), the altruist isn’t actually conceding very much.
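To put rough numbers on that last claim, here is a quick sketch (the effectiveness figures are invented; only the "within a factor of 2" bound comes from the comment above):

```python
# Back-of-the-envelope check: if the top 5 causes are within a factor of 2
# in marginal utility per dollar, how much does an even 5-way split cost
# the altruist relative to going all-in on the best one?
# NOTE: the effectiveness numbers are made up for illustration.

effectiveness = [2.0, 1.8, 1.5, 1.2, 1.0]  # utility per dollar, best to worst

all_in = max(effectiveness)                           # 2.0 per dollar
even_split = sum(effectiveness) / len(effectiveness)  # 1.5 per dollar

concession = 1 - even_split / all_in
print(f"all-in: {all_in:.2f}, even split: {even_split:.2f}, "
      f"altruist's concession: {concession:.0%}")  # -> 25%
```

Even in the worst case allowed by the factor-of-2 bound (four causes at exactly half the best one's effectiveness), the concession tops out at 40%; with the gentler spread above it is 25%.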
That’s a pretty good guess. Probably correct. I wonder, though, how many people manage to care about charity so directly as to value saving lives literally for the sake of saving lives, rather than for the emotional satisfaction associated with it. I think the odds of my suffering from plague, reincarnation, violent uprising, etc. partly caused by my donating to a slightly suboptimal basket of charities are basically negligible. What, then, is the moral or philosophical theory that says I should privilege the act of donating my whole charity budget to one maximally efficient charity over the emotional satisfaction of donating to a basket of moderately efficient charities? I enjoy the latter more; I know because I have tried each method a few times in different years. Why should I personally do something that I enjoy less? I don’t mean to be triumphant about this; possibly there is a very good reason why I should do something that I enjoy less. I just don’t know what it is. And don’t say something blunt like “it’ll save more lives.” I know it will save more lives on average, and I’ve noticed that I don’t care. Should I work to change this about myself, and if so, why?
I see it as a “deal” between an egoist subagent and an altruist subagent.
The crucially important factor in this deal is just what the effectiveness ratio is between charity #1 and charities #2, #3, #4, #5, #6. If the marginal good done per $ is similar between all of them, then OK, go ahead and diversify.
All right, well, let’s consider the least convenient example. Suppose the estimated marginal good of charity #1 and charity #6 differs by a factor of 8 -- enough to horrify the altruist, but barely enough for the egoist (who primarily likes to think that he’s being useful on lots of the most important problems) to even notice.
What can I tell the moderator subagent that would make him want to side in favor of the altruist subagent?
Well, instead of spreading the money between all 6 charities, why not cut your donation to 50% of the budget but send all of it to #1, and then give the remaining 50% to the egoist subagent to buy something nice with?
That’s good thinking, but this particular egoist primarily likes to think that he’s being useful on lots of apparently important problems. He can’t be bribed with ordinary status symbols like fancy watches. Is there a way to spend money to trick yourself into thinking you’re useful? None immediately springs to mind, but I guess there might be one or two.
You could spend 90% of the money on charity #1 and split the remaining 10% between the rest.
Thank you.
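Whether either deal actually beats an even split depends on how steeply effectiveness falls off between #1 and #6, not just on the factor-of-8 ratio at the endpoints. A quick comparison (both effectiveness profiles are invented; only the 8:1 ratio comes from the discussion above):

```python
# Compare the strategies discussed above under a factor-of-8 spread between
# charity #1 and charity #6. Both effectiveness profiles are invented;
# only the 8:1 endpoint ratio comes from the discussion.

def utility(weights, eff):
    """Total good per budget dollar for a given allocation."""
    return sum(w * e for w, e in zip(weights, eff))

even_split   = [1/6] * 6             # spread across all 6
half_to_best = [0.5, 0, 0, 0, 0, 0]  # 50% to #1, egoist keeps the rest
ninety_ten   = [0.9] + [0.1/5] * 5   # 90% to #1, 10% spread over the rest

gentle = [8, 7, 6, 5, 4, 1]      # effectiveness falls off slowly
steep  = [8, 2, 2, 1.5, 1.2, 1]  # effectiveness drops sharply after #1

for name, eff in [("gentle falloff", gentle), ("steep falloff", steep)]:
    print(name)
    for label, w in [("even 6-way split", even_split),
                     ("50% all to #1   ", half_to_best),
                     ("90/10 split     ", ninety_ten)]:
        print(f"  {label}: {utility(w, eff):.2f} good per budget dollar")
```

On these numbers the 90/10 split comes out ahead under both profiles, while the 50%-to-#1 deal beats even diversification only when effectiveness drops off sharply after #1.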
Which actually isn’t all that irrational if we think of it as a decision theory problem with diminishing returns on money—making sure that at least some of your money is used well becomes more important than gambling that all of it is used well.
Of course, given the nature of what’s being done with the money, the returns diminish much, much more slowly than we’re used to; diversification shouldn’t be a concern until you’re Onassis-ish rich.
As clearly stated above, for small donors marginal returns don’t diminish.
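To make the disagreement concrete: diversifying hedges anything only if utility is concave in the total good done. A toy model (the lives-saved figures and the fraud probability are invented):

```python
import math

# Toy model of the exchange above: charity #1 saves 8 lives per budget but
# is a fraud with probability 10% (both numbers invented); a diversified
# basket reliably saves 4. Diversifying only helps if utility is concave
# in the total good done.

p_fraud = 0.10
concentrated = [(p_fraud, 0.0), (1 - p_fraud, 8.0)]  # (probability, lives)
basket = [(1.0, 4.0)]

def expected_utility(outcomes, u):
    return sum(p * u(lives) for p, lives in outcomes)

# Small donor: marginal returns roughly constant, so utility is linear.
linear = lambda x: x
# "Make sure at least some money is used well": sharply diminishing returns.
saturating = lambda x: 1 - math.exp(-x)

for name, u in [("linear", linear), ("saturating", saturating)]:
    c = expected_utility(concentrated, u)
    b = expected_utility(basket, u)
    print(f"{name:>10} utility: all-in on #1 = {c:.2f}, basket = {b:.2f}")
```

With linear utility, the all-in strategy wins comfortably despite the fraud risk (7.2 vs. 4.0 expected lives); only under sharply diminishing returns on total good done does the basket come out ahead, which is exactly the point in dispute for a small donor.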