What is the most effective way to donate to AGI XRisk mitigation?
There are now many organizations in the field of Existential Risk from Artificial General Intelligence. I wonder which can make the most effective use of small donations.
My priority is mathematical or engineering research aimed at XRisk from superhuman AGI.
My donations go to MIRI, and for now it looks like the best option to me, but I would appreciate a thoughtful assessment.
Machine Intelligence Research Institute pioneered AGI XRisk mitigation (alongside FHI, below) and does foundational research. Their approach aims to avoid a rush to implement an AGI with unknown failure modes.
Alignment Research Center: Paul Christiano’s new organization. He has done impressive research and has worked with both academia and MIRI.
Center for Human-Compatible Artificial Intelligence at Berkeley: If you’re looking to sponsor academia rather than an independent organization, this one does research that combines mainstream AI methods with serious consideration of XRisk.
Future of Humanity Institute at Oxford is a powerhouse in multiple relevant areas, including AGI XRisk Research.
The Centre for the Study of Existential Risk at Cambridge. Looks promising. I haven’t seen much on AGI XRisk from them.
Leverhulme Centre for the Future of Intelligence, also at Cambridge and linked to CSER.
Smaller organizations whose scope goes beyond AGI XRisk. I don’t know much about them otherwise.
Donating money to a grant-disbursing organization makes sense if you believe that they have better ability to determine effectiveness than you. Alternatively, you might be guided by their decisions as you make your own donations.
Survival and Flourishing Fund / Survival and Flourishing.org
Solenum Foundation (see here). Recently, Jaan Tallinn discussed a new initiative for an assessment pipeline that will aggregate expert opinion on the most effective organizations.
Future of Life Institute: It’s not clear if they still actively contribute to AI XRisk research, but they did disburse grants a few years ago.
Are there others?
Quick thought: I expect that the most effective donation would be to organizations funding independent researchers, notably the LTFF.
Note that I’m an independent researcher funded by the LTFF (and Beth Barnes), but even if you told me that the money would never go to me, I would still think that.
Grants from organizations like that have a good track record of producing valuable research: at least two people I consider among the most interesting thinkers on the topic, John S. Wentworth and Steve Byrnes, have received grants from such sources (Steve is technically funded by Beth Barnes with money from the donor lottery), and others I’m really excited about, like Alex Turner, were helped by LTFF grants.
Such grants allow researchers to bootstrap their careers and to explore less incentivized subjects related to alignment at the start of those careers.
They are also cheaper than funding a hire at an organization like MIRI, ARC, or CHAI.
Thank you. Can you link to some of the better publications by Wentworth, Turner, and yourself? I’ve found mentions of each of you online but I’m not finding a canonical source for the recommended items.
I found this about Steve Byrnes.
And this about Beth Barnes.
Sure.
For Alex Turner, his main sequence is the place to start.
For John Wentworth, he has a sequence on abstraction, and a lot of great content around it.
For Steve Byrnes, most of his work is on brain-based (or brain-inspired) AGI; see here, here, and here for example.
Personally, I feel like my best work is stuff I’m working on right now, but you can look at my sequence on goal-directedness and my sequence of distillations.
For a bit more funding information:
John’s LTFF grants: here and here.
Alex’s LTFF grants: here, here and here.
Steve’s post on his funding: here.
References to my funding: here and here.
You may be interested in Larks’ AI Alignment charity reviews. The only organization I would add is the Qualia Research Institute, which is my personal speculative pick for the highest-impact organization, even though they don’t do alignment research. (They’re trying to develop a mathematical theory of consciousness and qualia.)
Thank you! That is valuable. I’d also love to get educated opinions on the quality of the research at some of these, with a focus on foundational or engineering research aimed at superhuman-AGI XRisk (done mostly, I think, at MIRI, at FHI, and by Christiano), but that article is great.
There may be many people working for top orgs (in the donor’s judgment) who are able to convert additional money into productivity effectively. This seems especially likely in academic orgs, where the org probably faces strict restrictions on salaries (but I wouldn’t be surprised if it’s similarly the case for other orgs). So a private donor could solicit applications (with minimal form-filling) from such people, and then distribute the donation among those who applied.
Gonna +1 the other comments that name the LTFF and Larks’ annual reviews. Though if I were to donate myself, I’d probably go with a donor lottery. (The CEA donor lottery is not currently up, alas.)
I agree that there are multiple types of basic research we might want to see, and maybe not all of them are getting done. I therefore put a somewhat decent effect size on traditional academic grants from places like FLI, even though most of their grants aren’t useful, because it seems like a way to actually get engineers to work on problems we haven’t thought of yet. This is the grant-disbursing process as an active ingredient, not just as filler. I am skeptical that this effect size is bigger on the margin than just increasing CHAI’s funding, but presumably we want some amount of diversification.
Thank you. Can you point me to a page on FLI’s latest grants? What I found was from a few years back. Are there other organizations whose grants are worthy of attention?
I actually haven’t heard anything out of them in the last few years either. My knowledge of grantmaking organizations is limited—I think similar organizations like Berkeley Existential Risk Initiative, or the Long-Term Future Fund, tend to be less about academic grantmaking and more about funding individuals and existing organizations (not that this isn’t also valuable).
Right on time: it turns out there are more grants, though now I’m not sure if these are academic-style or not (I guess we might see the recipients later). https://futureoflife.org/fli-announces-grants-program-for-existential-risk-reduction/