The non-spicy answer is probably the LTFF, if you’re happy deferring to the fund managers there. I don’t know what your risk tolerance for wasting money is, but you can check whether they meet it by looking at their track record.
If you have a lot of time you might be able to find better ways to spend money than the LTFF can. (Like if you can find a good way to fund intelligence amplification as Tsvi said).
My perspective is that I’m much more optimistic about policy than about technical research, but I don’t really feel qualified to evaluate policy work, and LTFF makes almost no grants on policy. I looked around and couldn’t find any grantmakers who focus on AI policy. And even if they existed, I don’t know that I could trust them (for instance, I don’t think Open Phil is trustworthy on AI policy, and I somewhat buy Habryka’s arguments that their policy grants are net negative).
I’m in the process of looking through a bunch of AI policy orgs myself. I don’t think I can do a great job of evaluating them but I can at least tell that most policy orgs aren’t focusing on x-risk so I can scratch them off the list.
How does one view the actual outcomes of the ‘Highlighted Grants’ on that page?
It would be a lot more reassuring if readers could check that they’ve all been fulfilled and/or exceeded expectations.
Wanting to answer a very similar question, I’ve just done about a day of donation research into x-risk funds. There are three that have caught my interest:
Long Term Future Fund (LTFF) from EA Funds
in 2024, LTFF grants have gone mostly to individual TAIS researchers (plus some policy folks and very small orgs) working on promising projects. Most are 3- to 12-month stipends between $10k and $100k.
see their Grants Database for details
Emerging Challenges Fund (ECF) - Longview Philanthropy
gives grants to orgs working on AIS, biorisk, and nuclear risk. funds both policy work (diplomacy, laws, advocacy) and technical work (TAIS research, technical biosafety)
see their 2024 Report for details
Global Catastrophic Risks Fund (GCR Fund) - Founders Pledge
focuses on prevention of great power conflicts
their grants cover things like US-China diplomacy efforts on nuclear, AI, and autonomous-weapons issues, as well as biorisk strategy and policy work.
A very rough estimate of LTFF effectiveness (how much does $1 reduce p(doom)?):
The article Microdooms averted by working on AI Safety uses a simple quantitative model to estimate that one extra AIS researcher will avert 49 microdooms on average at current margins.
Considering only humanity’s current 8B people, this would mean roughly 400,000 current people saved in expectation by each additional researcher. Note that depending on parameter choices, the model’s result could easily go up or down an order of magnitude.
The rest are my calculations (a sanity-check sketch of the arithmetic follows this list):
optimistic case: the researcher has all their impact in the first year and only requires a yearly salary of $80k. This would imply 0.6 nanodooms / $, or 5 current people saved / $.
pessimistic case: the researcher takes 40 years (a full career) to have that impact, and large compute and organizational staffing costs mean their career costs 10x their salary. This implies 400x lower effectiveness, i.e. 1.5 picodooms / $, or 0.012 current people saved / $, i.e. about $80 to save a person.
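To make the arithmetic easy to check, here is a minimal sketch (in Python, which isn’t otherwise used in this thread) that reproduces both cases. The inputs (49 microdooms per researcher, $80k salary, 40-year career, 10x cost multiplier) are the assumptions stated above, not figures from LTFF itself.

```python
# Rough cost-effectiveness sketch for funding one extra AIS researcher.
# Assumed inputs (from the estimate above, not from LTFF):
microdooms_averted = 49          # per extra researcher, at current margins
population = 8e9                 # current people only

dooms_averted = microdooms_averted * 1e-6
people_saved = dooms_averted * population
print(f"people saved in expectation: {people_saved:,.0f}")        # ~392,000

# Optimistic case: all impact in year one, total cost = one $80k salary.
cost_optimistic = 80_000
print(f"optimistic: {dooms_averted / cost_optimistic * 1e9:.1f} nanodooms/$, "
      f"{people_saved / cost_optimistic:.1f} people/$")           # ~0.6, ~5

# Pessimistic case: impact spread over a 40-year career, total cost 10x salary.
cost_pessimistic = cost_optimistic * 40 * 10
print(f"pessimistic: {dooms_averted / cost_pessimistic * 1e12:.1f} picodooms/$, "
      f"${cost_pessimistic / people_saved:.0f} per person saved")  # ~1.5, ~$82
```

Running this reproduces the figures quoted above (the pessimistic case comes out at roughly $80 per current person saved).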
For me at least, these actually look like quite promising results! I now think of “funding an extra AIS researcher” as a baseline to compare other x-risk interventions to.
One can do better than that: finding and supporting especially talented researchers, or ones working on especially promising avenues, should be a lot more effective than funding the average AIS researcher. This is exactly what LTFF is doing right now.
The other two funds seem to focus more on finding and supporting especially promising policy efforts at the organizational level. Their picks seem to me potentially even more promising than LTFF’s, but I currently have no way to model this, so that’s just my intuition.
I intend to start donating to one of these three funds as a consequence of these findings.