If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR, it transforms two inefficient givers into efficient givers (or doubles the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.
I agree with you on this, but I think CEA is that meta-charity you're talking about, not CFAR. The reason is that both CFAR and CEA (via Giving What We Can and 80,000 Hours) are focused on building a community of do-gooders, but only CEA is doing so explicitly.
My understanding from current CFAR workshops is that CFAR doesn’t have much content about effectively donating or effective altruism per se, though I could be missing something.
Is there any before/after analysis of CFAR attendees on metrics like amount of money donated or donation targets?
~
Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.
I agree this is the key benefit of CFAR, though it's hard to know at the moment whether CFAR will adequately accomplish this (that said, I do agree that current CFAR material is high-quality and getting better).
That's pretty much why I wanted a commitment to certain epistemic rationality projects: to show that it's possible to train epistemic rationality better (which has high value of information) and to make sure CFAR gets some momentum in that direction.