I’m eager to see Eliezer’s planned reply to your “ETA2”, but in the meantime, here are a few of my own thoughts on this...
My guess is that movement-building and learning are still the best things to do right now for AI risk reduction. CEA, CFAR, and GiveWell are doing good movement-building, though the GiveWell crowd tends to be less interested in x-risk mitigation. GiveWell is doing a large share of the EA learning, and might eventually (via GiveWell Labs) do some of the x-risk learning (right now GiveWell has a lot of catching up to do on x-risk).
The largest share of the “explicit” x-risk learning is happening at or near FHI & MIRI, including e.g. Christiano. Lots of “implicit” x-risk learning is happening at places like NASA, where it’s not clear that EA-sourced funding can have much marginal effect relative to the effect it could have on tiny organizations like MIRI and FHI.
My impression, which could be wrong, is that GiveWell’s ability to hire more researchers is not funding-limited but rather limited by management’s preference to offer lower salaries than necessary to ensure cause loyalty. (I would prefer GiveWell raise salaries and grow its research staff faster.) AMF could be fully funded relatively easily by Good Ventures or the Gates Foundation, but maybe they’re holding back because this would be discouraging to the EA movement: small-scale donors requiring the high-evidence threshold met by GiveWell’s top charities would say “Well, I guess there’s nothing for little ol’ me to do here.” (There are other reasons they may be holding back, too.)
I think accelerating learning is more important right now than a DAF (donor-advised fund). Getting high-quality evidence about which x-risk mitigation efforts are worthwhile requires lots of work, but one thing we’ve learned in the past decade is that causes with high-quality evidence for their effectiveness tend to get funded, and this trend is probably increasing. The sooner we do enough learning to have high-quality evidence for the goodness of particular x-risk mitigation efforts, the sooner large funders will fund those efforts. Or, as Christiano writes:
To me it currently looks like the value of getting information faster is significantly higher than the value of money, and on the current margin I think most of these learning activities are underfunded.
And:
A relatively small set of activities seems to be responsible for most learning that is occurring (for example, much of GiveWell’s work, some work within the Centre for Effective Altruism, some strategy work within MIRI, hopefully parts of this blog, and a great number of other activities that can’t be so easily sliced up)
However, Paul thinks there are serious RFMF (room for more funding) problems here:
A more serious concern is that there seems to currently be a significant deficit of human capital specialized for this problem and willing to work on it (without already being committed to work on it), so barring some new recruitment strategies (e.g. paying market wages for non-EAs to do EA strategy research) there are significant issues with room for more funding.
In contrast, I think there is plenty of room for more funding here, even without resorting to “paying market wages for non-EAs to do EA strategy research”:
MIRI could run more workshops and hire some able and willing FAI researchers, which I think is quite valuable for learning about x-risk mitigation strategy, apart from the object-level FAI progress it might produce. But even excluding this...
With more cash, FHI and CSER could host strategy-relevant conferences and workshops, and get people like Stuart Russell and Richard Posner to participate.
I have plenty of EAs capable of doing the labor-intensive data-gathering work needed for much of the strategy work, e.g. collecting data on how fast different parts of AI are progressing, how much money has gone into AI R&D each decade since the 1960s, how ripple effects have worked historically, more IEM-relevant data like that in Katja’s tech report, etc. I just don’t have the money to pay them to do it.
FHI has lots more researcher-hours it could purchase if it had more cash.
Finally, a clarification: If I think movement-building and learning are most important right now, why is MIRI focused on math research this year? My views on this have shifted even since our 2013 strategy post, and I should note that Eliezer’s reasons for focusing on math research are probably somewhat different from mine.
In my estimation, MIRI’s focus on math research offers the following benefits to movement-building and learning:
Math research gets better traction with the world’s top cognitive talent than strategic research does. And once top talent is engaged by the math research, some of these top thinkers turn their attention to the strategic issues, too. (Historically true, not just speculation.)
Without an object-level research program on the most important problem (beneficent superintelligence), many of the best people just “bounce off” because there’s nothing for them to engage directly. (Historically true, not just speculation.)
And of course, FAI research tells us some things about how hard FAI research is, which lines of inquiry are tractable now, etc.
Your reasons for focusing on math research at MIRI seem sound, but I take it you’ve noticed the warning sign of finding that what you already decided to do turns out to be a good idea for different reasons than you originally thought?
Yes, though these reasons are pretty similar to the reasons that made me switch positions on strategy back when I thought a focus on strategic research would be best for MIRI in 2013.
My impression, which could be wrong, is that GiveWell’s ability to hire more researchers is not funding-limited but rather limited by management’s preference to offer lower salaries than necessary to ensure cause loyalty.
It’s clearly not funding-limited, as they have plenty of funding for operations. I’m less confident in the salary explanation for their difficulties in hiring, though it’s quite plausible as a significant factor.
And the “give more money via GiveWell’s top charities to accelerate their approach to their limits of growth” rationale gets worse every year, as the contribution of a dollar to their money moved falls by half, and the number of potential doublings remaining falls. So I would not see direct donations to GiveWell or its top charities as competitive, although other interventions that bolstered it more effectively could.
Two plausible examples: 80,000 Hours might deliver a number of good new hires to GiveWell, or the Effective Fundraising experimental project, inspired by discussion like this 80,000 Hours blog post, may succeed in efficiently mobilizing non-EA funds to support GiveWell’s top charities.
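A minimal toy sketch of the diminishing-leverage arithmetic above (illustrative only, not from the original exchange; the starting money-moved figure and the doubling growth rate are made-up placeholders):

```python
# Toy illustration (hypothetical numbers): if GiveWell's money moved keeps doubling,
# the share of it represented by one extra donated dollar halves each year, so the
# "accelerate GiveWell's growth" rationale for marginal donations weakens over time.

def marginal_share_per_year(initial_money_moved, annual_growth, years, donation=1.0):
    """Return, for each year, the fraction of that year's money moved
    that one extra donated dollar represents."""
    shares = []
    money_moved = initial_money_moved
    for _ in range(years):
        shares.append(donation / money_moved)
        money_moved *= annual_growth
    return shares

# Hypothetical: $10M moved this year, doubling annually, over the next 5 years.
for year, share in enumerate(marginal_share_per_year(10e6, 2.0, 5)):
    print(f"year {year}: an extra $1 is {share:.1e} of that year's money moved")
```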
Getting high-quality evidence about which x-risk mitigation efforts are worthwhile requires lots of work, but one thing we’ve learned in the past decade is that causes with high-quality evidence for their effectiveness tend to get funded, and this trend is probably increasing. The sooner we do enough learning to have high-quality evidence for the goodness of particular x-risk mitigation efforts, the sooner large funders will fund those efforts.
Yes.
I think accelerating learning is more important right now than a DAF.
One of the biggest virtues of a large “fund for the future,” IMHO, is that it would make it easier to start up new projects in the field separate from existing organizations if they could meet the (transparently announced) standards of the fund, with the process as transparent and productive of information as practicable, GiveWell style.
And it could serve those who think the existing organizations in the field are deficient in some remediable way (rather than having some general objection to all work in the area).
In contrast, I think there is plenty of room for more funding here, even without resorting to “paying market wages for non-EAs to do EA strategy research”...[FHI RFMF to hire more academics/free up grant-related time from existing staff...MIRI math workshops]...[hiring people to collect and publish data on past AI predictions, past AI progress, past inputs into the AI field, etc...]
Good points that I’m largely on board with, qualitatively, although one needs to make more of a case to show they meet the bar of beating existing alternatives, or waiting for others to enter the field and do things better. Also, I should mention the Global Catastrophic Risks Institute, even if no one at the EA events in England mentioned it while I was there.
One of the biggest virtues of a large “fund for the future,” IMHO, is that it would make it easier to start up new projects in the field separate from existing organizations if they could meet the (transparently announced) standards of the fund, with the process as transparent and productive of information as practicable, GiveWell style.
And it could serve those who think the existing organizations in the field are deficient in some remediable way (rather than having some general objection to all work in the area).
That does sound good. Is there any ongoing progress on figuring out what those transparently announced standards could be, and how one might set up such a DAF? Are there such standards in place for the one in the UK?
The one in the UK is mainly functioning as a short-term DAF along the lines of “direct the money as you intend, with trust of GWWC as a backstop,” which is fine if you don’t want to delay disbursement until after you die.
Is there any ongoing progress on figuring out what those transparently announced standards could be, and how one might set up such a DAF?
Not yet; so far there have mainly been discussions, e.g. with Paul, Rob Wiblin, Nick Beckstead, et al. I expect more from CEA on this (not wholly independently of my own actions).