I see you as arguing that GW/Open Phil might change its strategic outlook in the future, and that their disclosures aren’t high precision, so we can’t rule out that (at some point in the future, or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.
That doesn’t strike me as sufficient to argue that GW recommended charities funge so heavily against EA Funds that it’s “odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies.”
Here’s a potentially more specific way to get at what I mean.
Let’s say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund have, in expectation, a much better impact on the long-term future than GW recommended charities. In particular, let’s say she believes that (absent funging) giving $1 to GW recommended charities would be as valuable as giving $100 to the EA Long Term Future Fund.
You’re saying that she should reduce her estimate because Open Phil may change its strategy, or because the blog post may be an imprecise guide to Open Phil’s strategy, so there’s some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.
In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness?
Personally, I’d guess it’s lower than 15%, and I’d be quite surprised to hear you say you think it’s as high as 33%. Even 33% would still leave a difference that easily clears the bar for “large enough to pay attention to.”
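To make the arithmetic concrete, here’s a minimal sketch of the hypothetical above. The 100:1 starting ratio is the one stipulated in my example; the one-for-one reallocation model (a fraction f of each GW dollar ends up redirected toward Long Term Future Fund-style orgs) is my own simplifying assumption, not a claim about how Open Phil actually behaves.

```python
# Sketch of the hypothetical: the donor values $1 to the Long Term Future
# Fund at 100x $1 to GW recommended charities absent funging, and a
# fraction f of each dollar given to GW charities is assumed to be
# reallocated one-for-one toward Long Term Future Fund-style orgs.

def ltf_to_gw_value_ratio(f, base_ratio=100):
    """How many times more valuable $1 to the LTF Fund is than $1 to GW
    recommended charities, once a funging fraction f is accounted for."""
    # Value of $1 given to GW charities, measured in "LTF dollars":
    # (1 - f) stays in GW-style work, f ends up in LTF-style work.
    gw_dollar_value = (1 - f) * (1 / base_ratio) + f * 1
    return 1 / gw_dollar_value

for f in (0.0, 0.15, 0.33):
    print(f"funging fraction {f:.0%}: ratio ~{ltf_to_gw_value_ratio(f):.0f}x")

# funging fraction 0%: ratio ~100x
# funging fraction 15%: ratio ~6x
# funging fraction 33%: ratio ~3x
```

On this admittedly crude model, even 33% funging would leave the Fund roughly 3x as effective for this donor, so the remaining difference still looks large enough to pay attention to.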
Fwiw, to the extent that donors to GW are getting funged, I think it’s much more likely that they are funging with other developing world interventions (e.g. one recommended org hits diminishing returns, and so funding already targeted toward developing world interventions goes to a different developing world health org instead).
I’m guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked, although I haven’t had a chance to reread them). Is it possible that funging with GW top charities isn’t really your true objection?
Update: Nick’s recent comment on the EA Forum sure suggests there is a high level of funging, though maybe not 100%, and that giving a very large amount of money to EA Funds may, to some extent, cause him to redirect his attention from allocating Open Phil money to allocating EA Funds money. (This seems basically reasonable on Nick’s part.) So it’s not obvious that an extra dollar of giving to EA Funds corresponds to anything like an extra dollar of spending within that focus area.
Overall I expect *lots* of things like that, not just in the areas where people have asked questions publicly.
I don’t understand why this is evidence that “EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities”, which was Howie’s original question. It seems like evidence that donations to Open Phil (which afaik cannot be made by individual donors) funge against donations to the Long Term Future EA Fund.
The definitions of and boundaries between Open Phil, GiveWell, and Good Ventures, as financial or decision-making entities, are not clear.
I’d say that if you’re competent to make a judgement like that, you’re already a sufficiently high-information donor that abstractions like “EA Funds” are kind of irrelevant. For instance, by that point you probably know who Nick Beckstead is, and have an opinion about whether he seems like he knows more than you about what to do, and to what extent the intermediation of the “EA Funds” mechanism and need for public accountability might increase or reduce the benefits of his information advantage.
If you use the “EA Funds” abstraction, then you’re treating giving money to the EA Long Term Future Fund managed by Nick Beckstead as the same sort of action as giving money to the EA Global Development Fund managed by Elie Hassenfeld (which has largely given to GiveWell top charities). This seems obviously ridiculous to me if you have fine-grained enough opinions to have an opinion about which org’s priorities make more sense, and insofar as it doesn’t to you, I’d like to hear why.
This doesn’t look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it’s odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn’t seem like there’s a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?
I guess I interpreted Rob’s statement that “the EA Funds are usually a better fallback option than GiveWell” as shorthand for “the EA Fund relevant to your values is in expectation a better fallback option than GiveWell.” “The EA Fund relevant to your values” does seem like a useful abstraction to me.