Crossposted from answering a question on the EA Forum.
(These are my own professional opinions; other LTFF fund managers etc. might have other views.)
Hmm I want to split the funding landscape into the following groups:
LTFF
OP
SFF
Other EA/longtermist funders
Earning-to-givers
Non-EA institutional funders.
Everybody else
LTFF
At LTFF our two biggest constraints are funding and strategic vision. Historically it was some combination of grantmaking capacity and good applications, but I think that’s much less true these days. Right now we have enough new donations to fund what we currently view as our best applications for some months, so our biggest priority is finding a new LTFF chair to help (among other things) address our strategic vision bottlenecks.
Going forwards, I don’t really want to speak for other fund managers (especially given that the future chair should feel extremely empowered to shepherd their own vision as they see fit). But I think we’ll make a bid to try to fundraise a bunch more to help address the funding bottlenecks in x-safety. Still, even if we double our current fundraising numbers or so[1], my guess is that we’re likely to prioritize funding more independent researchers etc below our current bar[2], as well as supporting our existing grantees, over funding most new organizations.
(Note that in $ terms LTFF isn’t a particularly large fraction of the longtermist or AI x-safety funding landscape; I’m mostly talking about it because it’s the group I’m most familiar with.)
Open Phil
I’m not sure what the biggest constraints are at Open Phil. My two biggest guesses are grantmaking capacity and strategic vision. As evidence for the former, my impression is that they only have one person doing grantmaking in technical AI Safety (Ajeya Cotra). But it’s not obvious that grantmaking capacity is their true bottleneck, as a) I’m not sure they’re trying very hard to hire, and b) people at OP who presumably could do a good job at AI safety grantmaking (eg Holden) have moved on to other projects. It’s possible OP would prefer conserving their AIS funds for other reasons, eg waiting on better strategic vision or to have a sudden influx of spending right before the end of history.
SFF
I know less about SFF. My impression is that their problems are a combination of a) structural difficulties preventing them from hiring great grantmakers, and b) funder uncertainty.
Other EA/Longtermist funders
My impression is that other institutional funders in longtermism either don’t really have the technical capacity or don’t have the gumption to fund projects that OP isn’t funding, especially in technical AI safety (where the tradeoffs are arguably more subtle and technical than in eg climate change or preventing nuclear proliferation). So they do a combination of saving money, taking cues from OP, and funding “obviously safe” projects.
Exceptions include new groups like Lightspeed (which I think is more likely than not to be a one-off thing) and Manifund (which has a regranting model).
Earning-to-givers
I don’t have a good sense of how much latent money there is in the hands of earning-to-givers who are at least in theory willing to give a bunch to x-safety projects if there’s a sufficiently large need for funding. My current guess is that it’s fairly substantial. I think there are roughly three reasonable routes for earning-to-givers who are interested in donating:
1. pooling the money in a (semi-)centralized source,
2. choosing for themselves where to give, or
3. saving the money for better projects later.
If they go with (1), LTFF is probably one of the most obvious choices. But LTFF does have a number of dysfunctions, so I wouldn’t be surprised if either Manifund or some newer group ends up being the Schelling donation source instead.
Non-EA institutional funders
I think as AI Safety becomes mainstream, getting funding from governments and non-EA philanthropic foundations becomes an increasingly viable option for AI Safety organizations. Note that direct-work AI Safety organizations have a comparative advantage in seeking such funds. In comparison, it’s much harder for both individuals and grantmakers like LTFF to seek institutional funding[3].
I know FAR has attempted some of this already.
Everybody else
As worries about AI risk become increasingly mainstream, we might see people at all levels of wealth become more excited to donate to promising AI safety organizations and individuals. It’s harder to predict what either non-Moskovitz billionaires or members of the general public will want to give to in the coming years, but plausibly the plurality of future funding for AI Safety will come from individuals who aren’t culturally EA or longtermist or whatever.