I’d say that if you’re competent to make a judgement like that, you’re already a sufficiently high-information donor that abstractions like “EA Funds” are kind of irrelevant. For instance, by that point you probably know who Nick Beckstead is, and have an opinion about whether he seems like he knows more than you about what to do, and to what extent the intermediation of the “EA Funds” mechanism and need for public accountability might increase or reduce the benefits of his information advantage.
If you use the “EA Funds” abstraction, then you’re treating giving money to the EA Long Term Future Fund managed by Nick Beckstead as the same sort of action as giving money to the EA Global Development Fund managed by Elie Hassenfeld (which has largely given to GiveWell top charities). This seems obviously ridiculous to me if your opinions are fine-grained enough to include a view on which org’s priorities make more sense, and insofar as it doesn’t seem ridiculous to you, I’d like to hear why.
This doesn’t look to me like an argument that there is so much funging between EA Funds and GiveWell-recommended charities that it’s odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, or placing lots of weight on the well-being of animals), it doesn’t seem like there’s a decision-relevant amount of funging between GiveWell’s recommendations and the EA Fund they would choose. Do we disagree about that?
I guess I interpreted Rob’s statement that “the EA Funds are usually a better fallback option than GiveWell” as shorthand for “the EA Fund relevant to your values is in expectation a better fallback option than GiveWell.” “The EA Fund relevant to your values” does seem like a useful abstraction to me.