It looks like there are some good funding opportunities in AI safety right now
The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.
However, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1]
Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures. But they’ve recently stopped funding several categories of work (my own categories, not theirs):
Many Republican-leaning think tanks, such as the Foundation for American Innovation.
“Post-alignment” causes such as digital sentience or regulation of explosive growth.
The rationality community, including LessWrong, Lightcone, SPARC, CFAR, MIRI.
High school outreach, such as Non-trivial.
In addition, they are currently not funding (or not fully funding):
Many non-US think tanks, which don’t want to appear influenced by an American organisation (there are now probably more than 20 of these).
They do fund technical safety non-profits like FAR AI, but they’re probably underfunding this area, in part due to difficulty hiring for it over the last few years (though they’ve hired recently).
Political campaigns, since foundations can’t contribute to them.
Organisations they’ve decided are below their funding bar for whatever reason (e.g. most agent foundations work). OP is not infallible, so some of these might still be worth funding.
Nuclear security, since it’s on average less cost-effective than direct AI funding and so isn’t one of the official cause areas (though I wouldn’t be surprised if there were some good opportunities there).
This means many of the organisations in these categories have only been able to access a minority of the available philanthropic capital (in recent history, I’d guess ~25%). In the recent SFF grant round, I estimate they faced a funding bar 1.5 to 3 times higher.
This creates a lot of opportunities for other donors: if you’re into one of these categories, focus on finding gaps there.
In addition, even among organisations that can receive funding from Good Ventures, receiving what’s often 80% of funding from one donor is an extreme degree of centralisation. By helping to diversify the funding base, you can probably achieve an effectiveness somewhat above Good Ventures itself (which is kinda cool given they’re a foundation with 20+ extremely smart people figuring out where to donate).
Open Philanthropy (who advise Good Ventures on what grants to make) is also large and capacity-constrained, which means it’s relatively easy for them to miss small, new organisations (grants under $250k), individual grants, or grants that require speed. So smaller donors can play a valuable role by acting as “angel donors” who identify promising new organisations and then pass them on to OP to scale up.
In response to the attractive landscape, SFF allocated over $19m of grants, compared to an initial target of $5m to $15m. However, that wasn’t enough to fill all the gaps.
SFF published a list of the organisations that would have received more funding if they’d allocated another $5m or $10m. This list isn’t super reliable, because less effort was put into thinking about this margin, but it’s a source of ideas.
Some more concrete ideas that stand out to me as worth thinking about are as follows (in no particular order):
SecureBio is one of the best biorisk orgs, especially at the intersection of AI and bio. SFF gave $250k to the main org, but I would have been happy to see them get $1m.
If you’re a non-US person, consider funding AI governance non-profits in your locality, e.g. CLTR is a leading UK think tank working on AI safety; CeSIA is trying to build the field in France; the Simon Institute is focused on the UN in Europe; and there are now many others. If you’re Chinese, there are interesting opportunities there that only Chinese citizens can donate to (you can email me).
Center for AI Safety and their political Action Fund. These are Dan Hendrycks’ organisations; they have driven some of the bigger successes in AI policy and advise xAI. They’re not receiving money from OP. SFF gave $1.1m to CAIS and $1.6m to the action fund, but they could deploy more.
METR is perhaps the leading evals org and hasn’t received OP funding recently. They have funding in the short term, but their compute budget is growing very rapidly.
Apollo Research has a budget in the millions but only received $250k from SFF. It’s the leading European evals group and did important recent work on o1.
Lightcone. LessWrong seems to have been cost-effective at movement building, and the Lightcone conference space also seems useful, though it’s more sensitive to your assessment of the value of Bay Area rationality community building. It’s facing a major funding shortfall.
MATS Research, Tarbell, and Sam Hammond’s project within FAI could all use additional funds to host more fellows in their AI fellowship programmes. MATS has a strong track record (while the others are new). There are probably diminishing returns to adding more fellows, but it still seems like a reasonable use of funding.
If you’re into high school outreach, Non-trivial has a $1m funding gap.
Further topping up the Manifund regranter programme or The AI Risk Mitigation fund (which specialise in smaller, often individual grants).
I’m not making a blanket recommendation to fund these organisations, but they seem worthy of consideration, and also hopefully illustrate a rough lower bound for what you could do with $10m of marginal funds. With some work, you can probably find stuff that’s even better.
I’m pretty uncertain how this situation is going to evolve. I’ve heard there are some new donors starting to make larger grants (e.g. Jed McCaleb’s Navigation Fund). And as AI safety becomes more mainstream, I expect more donors to enter. Probably the most pressing gaps will be better covered in a couple of years. If that’s true, giving now could be an especially impactful choice.
In the future, there may also be opportunities to invest large amounts of capital in scalable AI alignment efforts, so it’s possible future opportunities will be even better. But there are concrete reasons to believe there are good opportunities around right now.
If you’re interested in these opportunities:
If you’re looking to give away $250k/year or more, reach out to Open Philanthropy, who regularly recommend grants to donors other than Good Ventures (donoradvisory@openphilanthropy.org).
Longview provides philanthropic advisory in this area, and also has a fund.
Otherwise, reach out to some of the orgs I’ve mentioned and ask for more information, and ask around about them to make sure you’re aware of critiques.
If you just want a quick place to donate, pick one of these recommendations by Open Philanthropy staff or the Longview Fund.
[1] I’m writing this in an individual capacity and don’t speak for SFF or Jaan Tallinn.