(One might ask: if the idea of meta-charity is so good, why don’t many more meta-charities exist than currently do?) So you might need to see a lot more hard data (perhaps verified by independent sources) before being convinced.
This is a really interesting issue, and it applies to any exceptional giving candidate, not just to meta-charities. In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors—otherwise they’d already have funded whatever you’re planning on funding to the point where the returns diminish to the same level as everything else.
This relates to the issue of collecting lots of hard data because rationality is partly about the ability to make the right decision given a relatively small amount of data.
My tentative conclusion is that if you have no good reason to believe you’re more rational than the big money, then the best thing is to invest your resources in improving your own rationality.
because rationality is partly about the ability to make the right decision given a relatively small amount of data.
And sensibly collecting obtainable data that could make a big difference to a decision. Making correct decisions with less data is harder, and so more taxing of epistemic rationality; that very difficulty means it’s often instrumentally rational to avoid it by collecting the data where you can.
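As a toy illustration of when collecting obtainable data is worth it, here is a minimal expected-value-of-information sketch; every number in it is invented purely for the example.

```python
# Toy expected-value-of-information calculation; every number is invented.
# Choice: fund charity A (uncertain impact) or charity B (known impact).
# Cheap, obtainable data would resolve the uncertainty about A.

p_a_good = 0.5        # prior probability that A is the high-impact option
value_a_good = 10.0   # impact of A if it is good (arbitrary units)
value_a_bad = 1.0     # impact of A if it is not
value_b = 4.0         # known impact of B

# Deciding now: pick whichever option has the higher expected impact.
ev_a = p_a_good * value_a_good + (1 - p_a_good) * value_a_bad
decide_now = max(ev_a, value_b)

# Deciding after seeing the data: fund A only in the worlds where it is good.
decide_after_data = (p_a_good * max(value_a_good, value_b)
                     + (1 - p_a_good) * max(value_a_bad, value_b))

value_of_information = decide_after_data - decide_now
print(f"decide now: {decide_now}, after data: {decide_after_data}, "
      f"value of information: {value_of_information}")
# Gathering the data is instrumentally rational whenever the value of
# information exceeds the cost of collecting it.
```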
I’d treat the graph of GiveWell’s money moved as evidence in favour of meta (and in particular CEA) being promising, under three assumptions:
GW’s top charities really are significantly more effective than what people would otherwise be giving to (otherwise that graph would just show the amount of money uselessly moved from one place to another)
CEA is doing something orthogonal to what GW are doing (otherwise they might just be needlessly competing with each other)
CEA is part of the same “effective altruism” growth sector that GW is part of.
You could regard any charity fundraising as “meta” in some sense, but the market there is already saturated in a way that I don’t think “effective giving” is. So I wouldn’t expect people to be getting such huge returns from fundraising (even if they’re trying a somewhat novel approach), but I wouldn’t count this as strong evidence against meta.
Definitely curious about what other kinds of evidence I should be on the lookout for, or for reasons why I shouldn’t take GW’s big takeoff so seriously.
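To make the first of those assumptions concrete, here is a rough back-of-envelope sketch, with entirely made-up figures, of why money moved only translates into impact if the top charities really are more effective than the counterfactual donations.

```python
# Back-of-envelope sketch of assumption 1; all figures are hypothetical.
money_moved = 5_000_000         # dollars redirected by recommendations in a year
effectiveness_multiplier = 5.0  # how much more good a top charity does per dollar
                                # than the donors' counterfactual charities

# Each redirected dollar buys (multiplier - 1) extra units of good, measured
# in "counterfactual-charity dollar equivalents".
extra_good = money_moved * (effectiveness_multiplier - 1)
print(f"Extra good: equivalent of ${extra_good:,.0f} "
      f"given to the counterfactual charities")

# If the multiplier is 1, assumption 1 fails and the money-moved graph just
# tracks money shuffled between roughly interchangeable charities.
print(f"With no effectiveness edge: ${money_moved * (1.0 - 1):,.0f} of extra good")
```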
I’d treat the graph of GiveWell’s money moved as evidence in favour of meta (and in particular CEA) being promising, under three assumptions:
Yes, that and the stats for Giving What We Can/CEA look pretty good.
CEA is doing something orthogonal to what GW are doing (otherwise they might just be needlessly competing with each other)
I think competition tends to be good! It keeps people on their toes, and provides a check on problems. Consider your other point:
GW’s top charities really are significantly more effective than what people would otherwise be giving to (otherwise that graph would just show the amount of money uselessly moved from one place to another)
With competitors you could check the rate of concordance, examine the cases where they disagree, or look to see which organizations identify problems with data first, that sort of thing.
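As a sketch of what that kind of cross-check could look like in practice, here is a minimal example with hypothetical evaluators and hypothetical recommendation lists.

```python
# Minimal concordance check between two hypothetical charity evaluators.
# Every name and recommendation below is invented for illustration.
evaluator_a = {"charity_1", "charity_2", "charity_3", "charity_4"}
evaluator_b = {"charity_2", "charity_3", "charity_5"}

# Rate of concordance: overlap relative to everything either evaluator recommends.
overlap = evaluator_a & evaluator_b
union = evaluator_a | evaluator_b
concordance = len(overlap) / len(union)

print(f"Agree on: {sorted(overlap)}")
print(f"Disagree on: {sorted(union - overlap)}")  # the cases worth digging into
print(f"Rate of concordance: {concordance:.0%}")
```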
In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors—otherwise they’d already have funded whatever you’re planning on funding to the point where the returns diminish to the same level as everything else.
That’s if you think that the big funders are rational and have similar goals to yours. I think assuming they are rational is pretty close to the truth (though I’m not sure: charity doesn’t have the same feedback mechanisms as business, so if you get things wrong you don’t get punished in the same way). beoShaffer suggests that they just have different goals—they are aiming to make themselves look good, rather than do good. I think that could explain a lot of cases, but not all—e.g. it just doesn’t seem plausible to me for the Gates Foundation.
So I ask myself: why doesn’t Gates spend much more money on increasing revenue to good causes, through advertising etc.? One answer is that he does spend such money: the Giving Pledge must be the most successful meta-charity ever. Another is that charities are restricted in how they can act by cultural norms. E.g. if they spent loads of money on advertising, their reputation would take a big enough hit to outweigh the benefits of increased revenue.
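That reputational argument is really a break-even claim; a toy version of it, with every quantity invented purely for illustration, looks something like this:

```python
# Toy break-even model for the advertising question; every number is invented.
ad_spend = 1_000_000          # money a charity puts into advertising
revenue_per_ad_dollar = 3.0   # extra donations raised per dollar of ads
reputation_cost = 2_500_000   # donations lost, now and later, from looking like
                              # a charity that "wastes money on ads"

extra_revenue = ad_spend * revenue_per_ad_dollar
net_effect = extra_revenue - ad_spend - reputation_cost
print(f"Net change in money available for the cause: ${net_effect:,.0f}")
# Advertising heavily only makes sense if the extra revenue exceeds the ad
# spend plus the reputational hit; the cultural-norms point is the claim
# that for most charities it does not.
```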
beoShaffer suggests that they just have different goals—they are aiming to make themselves look good, rather than do good.
I agree with the part before the dash; I have a subtle but important correction to the second part. While the explicit desire to look good certainly can play a role, I think it is as or more common for giving to have a different proximate cause but still approximate efficient signaling (rather than efficient helping), because the underlying intuitions evolved for signaling purposes.
The best way to look good to, say, exceptionally smart people and distant-future historians, is to act in almost exactly the way a genuinely good person would act.
And sensibly collecting obtainable data that could make a big difference to a decision. Making correct decisions with less data is harder, and so more taxing of epistemic rationality; that very difficulty means it’s often instrumentally rational to avoid it by collecting the data where you can.
Yep, totally agree—see this comment and this post.
With competitors you could check the rate of concordance, examine the cases where they disagree, or look to see which organizations identify problems with data first, that sort of thing.
Cannot upvote this enough. Neglected Virtue of Scholarship and all that.