A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps

The longtermist funding ecosystem needs certain functions to exist at a reasonable scale. I argue LTFF should continue to be funded because we’re currently one of the only organizations comprehensively serving these functions. Specifically, we:

  • Fund early-stage technical AI safety researchers working outside major labs

  • Help talented people transition into existential risk work

  • Provide independent funding voices to balance out AI company influence

  • Move quickly to fund promising work in emerging areas, including “weirder” but potentially high-impact projects

Getting these functions right takes meaningful resources—well over $1M annually. This figure isn’t arbitrary: $1M funds roughly 10 person-years of work, split between supporting career transitions and independent research. Given what we’re trying to achieve—from maintaining independent AI safety voices to seeding new fields like x-risk focused information security—this is arguably a minimum viable scale.

While I’m excited to see some of these functions being taken up by others (especially in career development), I’m skeptical we’ll see comprehensive replacements anytime soon. My experience has been that proposed alternatives often die quickly or serve narrower functions than initially planned.

This piece fits into a broader conversation about LTFF’s effectiveness during the Forum’s Marginal Funding Week. I hope to publish more pieces in the coming weeks based on reader interest. Meanwhile, you can check out our 2024 and 2022-2023 payout reports, our past analyses of marginal grants, and (anonymized) grants we’ve narrowly rejected to better understand what LTFF does and why it matters.

Core Argument

To elaborate:

Something like LTFF ought to exist at a reasonable scale. We need funders who can:

  • Evaluate technical AI safety research from early-stage researchers working outside major institutions

  • Help talented people transition into working on existential risks, especially when their path doesn’t fit standard programs

  • Provide independent funding voices outside of major AI labs, particularly as AI companies increasingly influence the field

  • Move quickly to support promising work in emerging areas, without waiting for consensus

  • Fund “weirder” but potentially high-impact projects that other funders might consider too risky or unconventional

No other organization currently serves these functions at scale. While other funders do important work in adjacent spaces, they often have constraints that prevent them from filling these specific needs:

  • Most major funders have minimum grant sizes in the hundreds of thousands of dollars

  • Many restrict their focus to narrow, well-established research areas

  • Some have institutional conflicts that limit their independence

  • Others only open applications during specific windows or require extensive pre-existing track records

  • Many are too PR-sensitive to fund potentially controversial work

Therefore, LTFF should be funded. Until other organizations step up to fill these gaps (which I’d genuinely welcome), LTFF needs to continue serving these functions. And to serve them effectively, we need meaningful resources – as I’ll explain later, likely well over $1M annually.

I want to be clear: this isn’t meant to be a complete argument for funding LTFF. For that, you’d want to look at our marginal grants, create cost-effectiveness analyses, and/or make direct comparisons with other funding opportunities. Rather, it’s an attempt to convey the intuition that something needs to fill these ecosystem gaps, and right now, LTFF is one of the only organizations positioned to do so.

In the following sections, I’ll walk through each of these key functions in detail, examining why they matter and how LTFF approaches them.

Key Functions Currently (Almost) Unique to LTFF

Technical AI Safety Funding

We have always been one of the biggest funders of GCR-focused technical AI safety, particularly for early-stage nonacademic researchers. Our rotating staff of part-time grantmakers always includes multiple people who are highly engaged with and knowledgeable about AI safety, usually including some who are actively working in the field.

Why aren’t other funders investing in GCR-focused technical AI Safety?

At some level it’s surprising, but despite being arguably one of EA’s most important subcause areas, no other funder really specializes in funding GCR-focused technical AI safety. For a significant fraction of technical AI safety research, we are the most obvious, and sometimes the only, place people apply to.

For example:

  • Open Philanthropy’s AI safety work tends toward large grants in the high hundreds of thousands or low millions, meaning individuals and organizations with lower funding needs won’t be funded by them

  • They also, in practice, fund relatively little technical AI safety work compared to the rest of their GCR portfolio

  • Schmidt Futures has an interesting new program for the safety science of AI—but they explicitly do not fund research into catastrophic risks

  • Other funders like UK AISI’s Systematic AI Safety Grants or OpenAI’s Fast Grants do emerge, but they’re frequently short-lived and again often have large minimum grant sizes

  • Manifund offers a platform for funding impactful projects including AI safety, but their DIY approach means there’s no guarantee that somebody with funding ability will evaluate any specific project. Additionally, their always-public approach means applicants who have private needs or contextual information can’t share it privately

(This list is non-exhaustive; for example, I do not cover SFF or some of the European funders, which I know less about. In a later article I’d like to explicitly contrast LTFF’s approach to AI alignment and AI safety with that of other funders in the broader ecosystem.)

Career Transitions and Early Researcher Funding

We’ve historically funded many people for career transitions: people who are interested in working in fields we think are very important, but who are not currently able to work in those fields directly and productively.

Given how long we’ve been around, and how young many of the fields we work in are, if our field-building efforts were successful, we should be able to observe some macro-level effects. And I think we do: I believe many people now productively working on existential risk owe a sizable fraction of their career start to LTFF. As an experiment, think of (especially junior) people you respect in longtermism, existential risk reduction, or AI safety, and do an internet search for their name plus “LTFF”.

There are many components to successful field-building for a new research field. In particular, the user journey I’m imagining looks like: “somebody talented wants to work on x-risk (especially AI safety)” → ??? → “they are productively contributing useful work (especially research) toward making the world better via x-risk reduction.”

LTFF tries to ease people’s transition through the “???” part, both historically and today.

Why aren’t other groups investing in improving career transitions in existential risk reduction?

The happy answer to this question is “now they do!” For example:

  • BlueDot Impact’s AI Safety Fundamentals course can help people transition from “interested or concerned about AI safety” to being knowledgeable about the fundamentals of the field

    • They also provide courses on Pandemics for people interested in biosecurity and pandemic preparedness

  • The MATS Program takes technically qualified and experienced people with some knowledge of the fundamentals and pairs them with an experienced mentor who can help guide them through writing their (first?) relevant research paper

  • As industry, academia, and governments become increasingly interested in AI safety and willing to hire people, junior researchers can be hired earlier in their career trajectories and receive more on-the-job training and mentorship

In many ways, this is all really good from my perspective. Obviously, it lightens our load and means we can focus on other pressing problems or bottlenecks. Further, to be frank, the experience of being a grantee can often be quite poor. I’m sad it’s not better, but at the same time I’m optimistic that having more mentorship and a structured organizational umbrella probably adds a lot of value for trainees.

Going Forwards

We will continue to fund transition grants and early research grants. However, in the past 1-2 years, and I suspect even more so in the future, we will fund:

  • Fewer career-transition grants for early-career people just starting out

  • More grants for:

    • People who are successful in the middle of their careers and interested in a transition

    • People who have already received some training and can productively do useful research or other work now, even if they’d be even more productive working for a research-oriented organization

A good example of the latter would be MATS scholars and trainees. In every MATS cohort, usually more than two-thirds of the scholars apply to us for funding, many or most with competitive applications. Funding allows the scholars both to do directly useful research and to build more of a track record as they apply to other jobs.

Providing (Some) Counterbalance to AI Companies on AI Safety

Right now, AI companies substantially influence the public conversation on AI risk, as well as the conversation within the AI safety field itself. Via their internal safety teams, they pay for a large fraction of current work in technical AI safety. This is great in some ways, but it also limits the ability of independent voices to raise concerns, both directly and through various indirect channels (e.g., people angling for a job at XYZ AI company may become less inclined to criticize XYZ).

I think it is important to fund communication from a range of experts, to ensure that more perspectives are represented. It is arguably especially important to fund some technical safety research outside of the major AI labs, as a check on these labs: external researchers may be more comfortable voicing dissent with lab orthodoxy or pointing out if the major labs are acting irresponsibly.

Unfortunately, it is quite hard to form a sufficient counterbalance. AI labs are of course extremely well-funded, and most of the other big funders in this space have real or perceived conflicts of interest, such as donors or project leads holding large investments in AI companies or having strong social connections with lab leadership.

I don’t think the ability to be funded by LTFF is anywhere close to enough counterbalance. But having one funder stake out this position is much better than having zero, and I want us to take this responsibility seriously.

Going Forwards

In the medium term (say 3 to 5 years) my hope is that governments will take AI safety seriously enough to a) conduct their own studies and evaluations, b) hire their own people, c) fund academia and other independent efforts, and broadly attempt to serve as a check on AI corporate labs in the private sector, as governments are supposed to do anyway. Preventing/reducing regulatory capture seems pretty important here, of course.

In the meantime I want both LTFF and other nonprofit funders in AI Safety to be aware of the dynamic of labs influencing AI Safety conversations, and take at least some active measures to correct for that.

Funding New Project Areas and Approaches

LTFF is often able to move relatively quickly to fund new project areas that have high expected impact. In many of those cases, other funders either aren’t interested or will take 6-24 months to ramp up.

As long as a project has a plausible case that it might be really good for the long-term future, we will read it and attempt to make an informed decision about whether it’s worth funding. This means we’ve funded rather odd projects in the past, and will likely continue to do so.

Importantly, when other funders abandon project areas and subcause areas for reasons other than expected impact, as Open Phil/Good Ventures has recently announced, a well-funded LTFF can step in and take on some of the relevant funding needs.

Additionally, we have a higher tolerance for PR risks than most, and are thus able to fund a broader range of projects with higher expected impact.

Going Forwards

With sufficient funding, we can (and likely will) do active grantmaking in areas that currently either have no funding or are operating at very small scales. For example, Caleb Parikh (EA Funds Project Lead and LTFF fund manager) is one of the few people working very actively on field-building for global catastrophic risk-focused information security. We could potentially expand active grantmaking there considerably.

We would also potentially be excited to do active grantmaking in AI control.

We will continue to try to fund high-impact projects and subcause areas that other funders have temporarily abandoned.

Broader Funding Case

Why Current Funding Levels Matter

Let me try to give a rough sense of why LTFF likely needs well over $1M/year to fulfill its key functions, rather than loosely gesturing at “some number”. I acknowledge this is a large number with serious opportunity costs (about 200,000-250,000 delivered bednets, for example). But here’s the basic case:

  • $1M/year translates to roughly 10 person-years of work annually at typical stipends (~$100k/year)

  • The people we fund often have expensive counterfactuals

  • Looking at our key functions:

    • Technical AI Safety: We need enough independent researchers to form meaningful counterbalance to labs

    • Career Transitions: Supporting even a small cohort of transitions requires significant resources

    • New Project Areas: Areas like infosec or AI control need multiple full-time people to make progress

  • If we split $1M roughly between a) field-building/​career transitions and b) supporting established researchers outside labs, we get:

    • ~5-10 people in training/​transition

    • 2-5 full-time independent researchers

  • That’s a minimum viable scale for our core functions, and arguably too small to achieve our broader goals.

  • We also likely want to fund other things, like new programs or non-AI cause areas

Altogether, assuming there are productive uses for the money, it’s easy to see how effectively fulfilling all of LTFF’s functions takes more than $1M/year.
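The back-of-the-envelope arithmetic above can be sketched in a few lines. This is purely illustrative: the ~$100k stipend and the roughly even split come from the bullets above, while the per-person cost ranges used to recover the ~5-10 and ~2-5 headcounts are my own assumptions about how those ranges might arise, not figures from LTFF.

```python
# Illustrative budget arithmetic only; stipend and 50/50 split are the
# post's stated assumptions, per-person cost ranges are my own guesses.

ANNUAL_BUDGET = 1_000_000    # dollars per year
TYPICAL_STIPEND = 100_000    # dollars per person-year

person_years = ANNUAL_BUDGET / TYPICAL_STIPEND  # roughly 10 person-years

# Split the budget roughly in half between the two core functions:
transition_budget = ANNUAL_BUDGET / 2
research_budget = ANNUAL_BUDGET / 2

# Transition grants are often partial-year or lower-cost ($50k-$100k each),
# hence the wider headcount range; established researchers cost more
# ($100k-$250k each, hypothetical range).
transitions_supported = (transition_budget / 100_000, transition_budget / 50_000)
researchers_supported = (research_budget / 250_000, research_budget / 100_000)

print(person_years)           # 10.0
print(transitions_supported)  # (5.0, 10.0) -> ~5-10 people in transition
print(researchers_supported)  # (2.0, 5.0)  -> ~2-5 full-time researchers
```

Even under generous assumptions, $1M/year buys only a handful of full-time people, which is why it reads as a minimum viable scale rather than an ambitious one.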

Going Forwards

I genuinely hope that many of LTFF’s current functions will eventually be taken over by other organizations or become unnecessary. For example:

  • Governments might develop robust funding for independent AI safety research, internally, within academia, and with sponsored nonprofits

  • More structured programs might emerge for career transitions

  • New specialized funders might arise for specific cause areas

But I’m not confident this will happen soon, and I’m especially pessimistic that they will all happen soon in a comprehensive way. My experience has been that when other projects propose to replace LTFF’s functions:

  • Many die in their infancy

  • Some come and go quickly

  • Others serve important but narrower functions than initially planned

In January 2022 (when I first joined LTFF) there were serious talks of shutting LTFF down and having another organization serve our main functions, like maintaining continuously open applications for smaller grants. Nearly 3 years later, LTFF is still around, and many of the proposed replacement projects haven’t materialized.

Thus, I tentatively think we should:

  • Continue serving these functions until robust replacements exist

  • Stay flexible about which gaps most need filling

  • Maintain readiness to shift focus as the ecosystem evolves

  • Support rather than compete with new organizations trying to take on these roles

Conclusion

We need something like LTFF to exist and be reasonably well-funded. This need stems from critical ecosystem gaps—supporting early-stage technical AI safety research, helping talent transitions into existential risk work, providing independent voices outside major labs, and funding promising new areas quickly.

Right now, no other organization comprehensively serves these functions at scale. While I’m excited to see new programs emerging in specific areas, much of the full set of capabilities we need still primarily exists within LTFF. Until that changes (which I’d genuinely welcome!), LTFF needs meaningful resources to continue this work.

This isn’t a complete case for funding LTFF—that would require detailed cost-effectiveness analysis and comparison with other opportunities. But I hope I’ve conveyed why having these ecosystem functions matters, and why right now, LTFF is one of the few organizations positioned to provide them.

Appendix: LTFF’s Institutional Features

While our core case focuses on key ecosystem functions LTFF provides, there are several institutional features that enable us to serve these functions effectively:

Transparency and Communication

We aim to be a consistently candid funder, with frequent, public communication about our grants and decision-making processes. We often don’t hit this goal to a degree that satisfies us or the community. But we do more public communication than most longtermist funders and organizations, through channels like our payout reports and our analyses of marginal and rejected grants.

Operational Features

We maintain several operational practices that support our core functions:

Continuous Open Applications

  • Year-round open application form for any long-term future focused proposal

  • Contrasts with funders that:

    • Don’t have open applications

    • Only open periodically

    • Restrict to narrow RFPs

  • In theory allows for emergency/​time-sensitive proposals, though in practice our response time varies

Institutional Memory

We’ve been around longer than most institutional funders in this space (though the space itself is quite young). This gives us valuable context about:

  • What approaches have been tried

  • How fields have evolved

  • Where persistent gaps remain

Risk Tolerance

  • Less concerned about PR risks than most funders

  • Able to fund potentially controversial work when expected impact justifies it

  • While we don’t seek PR risks to LTFF itself, we try to be cooperative with the broader community (EA, longtermism, AI safety, etc.) to avoid passing on reputation risks

Diverse Worldviews

  • Consider multiple theories of change rather than committing to a narrow approach

  • Rotating staff brings varied perspectives

  • Willing to bet on different approaches to improving the long-term future

Crossposted from the EA Forum.