My impression, which could be wrong, is that GiveWell’s ability to hire more researchers is not funding-limited, but rather limited by management’s preference to offer lower-than-market salaries in order to ensure cause loyalty.
It’s clearly not funding-limited, as they have plenty of funding for operations. I’m less confident that salaries explain their difficulties hiring, though it is quite plausible as a significant factor.
And the “give more money via GiveWell’s top charities to accelerate their approach to their limits of growth” rationale gets worse every year: each time GiveWell’s money moved doubles, the contribution of a marginal dollar to that total falls by half, and the number of potential doublings remaining shrinks. So I would not see direct donations to GiveWell or its top charities as competitive, although other interventions that bolstered GiveWell more effectively could be.
Two plausible examples: 80,000 Hours might deliver a number of good new hires to GiveWell, or the Effective Fundraising experimental project, inspired by discussions like this 80,000 Hours blog post, may succeed in efficiently mobilizing non-EA funds to support GiveWell’s top charities.
Getting high-quality evidence about which x-risk mitigation efforts are worthwhile requires lots of work, but one thing we’ve learned in the past decade is that causes with high-quality evidence for their effectiveness tend to get funded, and this trend is probably strengthening. The sooner we do enough learning to have high-quality evidence for the goodness of particular x-risk mitigation efforts, the sooner large funders will fund those efforts.
Yes.
I think accelerating learning is more important right now than a DAF.
One of the biggest virtues of a large “fund for the future,” IMHO, is that it would make it easier to start up new projects in the field, separate from existing organizations, provided they could meet the fund’s (transparently announced) standards, with the process as transparent and productive of information as practicable, GiveWell-style.
And it could serve those who think the existing organizations in the field are deficient in some remediable way (rather than having some general objection to all work in the area).
In contrast, I think there is plenty of room for more funding here, even without resorting to “paying market wages for non-EAs to do EA strategy research”: e.g., FHI RFMF to hire more academics or free up grant-related time from existing staff, MIRI math workshops, or hiring people to collect and publish data on past AI predictions, past AI progress, past inputs into the AI field, etc.
Good points that I’m largely on board with, qualitatively, although one needs to make more of a case to show they meet the bar of beating existing alternatives, or waiting for others to enter the field and do things better.
That does sound good. Is there any ongoing progress on figuring out what those transparently announced standards could be, and how one might set up such a DAF? Are there such standards in place for the one in the UK?
The one in the UK mainly functions as a short-term DAF along the lines of “direct the money as you intend, with trust of GWWC as a backstop,” which is fine if you don’t want to delay disbursement until after you die.
Is there any ongoing progress on figuring out what those transparently announced standards could be, and how one might set up such a DAF?
Not yet; so far there have mainly been discussions, e.g. with Paul, Rob Wiblin, Nick Beckstead, et al. I expect more from CEA on this (not wholly independently of my own actions).
Also, I should mention the Global Catastrophic Risks Institute, even if no one at the EA events in England mentioned it while I was there.