There are lots of great charitable giving opportunities out there right now.
The first time I served as a recommender in the Survival and Flourishing Fund (SFF) was back in 2021. I wrote in detail about my experiences then. At the time, I did not see many great opportunities, and was able to give out as much money as I found good places to put it.
How the world has changed in three years.
I recently had the opportunity to be an SFF recommender for the second time. This time I found an embarrassment of riches. Application quality was consistently higher, there were more than twice as many applications, and essentially all applicant organizations were looking to scale their operations and spending.
That means the focus of this post is different. In 2021, my primary goal was to share my perspective on the process and encourage future SFF applications. Sharing information on organizations was a secondary goal.
This time, my primary focus is on the organizations. Many people do not know good places to put donations. In particular, they do not know how to use donations to help AI go better and in particular to guard against AI existential risk. Until doing SFF this round, I did not have any great places to point them towards.
(Not all the applications were about AI. There is also a lot of attention to biological existential and catastrophic risks, some to nuclear threats, and a number of applications that took entirely different approaches.)
Table of Contents
Organizations where I have the highest confidence in straightforward modest donations now, if your goals and model of the world align with theirs, are in bold.
Do Not Think Only On the Margin, and Also Use Decision Theory.
Organizations Focusing On AI Non-Technical Research and Education.
Organizations Doing Math, Decision Theory and Agent Foundations.
ALTER (Affiliate Learning-Theoretic Employment and Resources) Project.
MSEP Project at Science and Technology Futures (Their Website).
German Primate Center (DPZ) – Leibniz Institute for Primate Research.
Organizations That then Regrant to Fund Other Organizations.
Centre for Enabling Effective Altruism Learning & Research (CEELAR).
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS).
A Word of Warning
The SFF recommender process is highly time constrained. Even though I used well beyond the number of required hours, there was no way to do a serious investigation of all the potentially exciting applications. Substantial reliance on heuristics was inevitable.
Also, your priorities, opinions, and world model could be very different from mine.
If you are considering donating a substantial amount of money, please do the level of personal research and consideration commensurate with the amount of money you want to give away.
If you are considering donating a small amount of money, or if the requirement to do personal research might mean you don’t donate to anyone at all, I caution the opposite: Only do the amount of optimization and verification and such that is worth its opportunity cost. Do not let the perfect be the enemy of the good.
For more details of how the SFF recommender process works, see my post on the process.
In addition, note that donations to some of the organizations below may not be tax deductible.
Use Your Personal Theory of Impact
Do not let me, or anyone else, tell you any of:
What is important or what is a good cause.
What types of actions are best to make the change you want to see in the world.
What particular strategies seem promising to you.
That you have to choose according to some formula or you’re an awful person.
This is especially true when it comes to policy advocacy, and especially in AI.
If an organization is advocating for what you think is bad policy, don’t fund them!
If an organization is advocating or acting in a way you think is ineffective, don’t fund them!
Only fund people you think advance good changes in effective ways. Not cases where I think that. Cases where you think that.
Briefly on my own prioritization right now (but again you should substitute your own): I chose to deprioritize all meta-level activities and talent development, because of how much good object-level work I saw available to do, and because I expected others to often prioritize talent and meta activities. I was largely but not exclusively focused on those who in some form were helping ensure AI does not kill everyone. And I saw high value in organizations that were influencing lab or government AI policies in the right ways, and continue to value Agent Foundations style and other off-paradigm technical research approaches.
Use Your Local Knowledge
I believe that the best places to give are the places where you have local knowledge.
If you know of people doing great work or who could do great work, based on your own information, then you can fund and provide social proof for what others cannot.
The less legible to others the cause, the more excited you should be to step forward, if the cause is indeed legible to you. This keeps you grounded, helps others find the show (as Tyler Cowen says), is more likely to be counterfactual funding, and avoids information cascades or looking under streetlights for the keys.
Most importantly it avoids adverse selection. The best legible opportunities for funding, the slam dunk choices? Those are probably getting funded. The legible things that are left are the ones that others didn’t sufficiently fund yet.
If you know why others haven’t funded, because they don’t know about the opportunity?
That’s a great trade.
Unconditional Grants to Worthy Individuals Are Great
The process of applying for grants, raising money, and justifying your existence sucks.
A lot.
It especially sucks for many of the creatives and nerds that do a lot of the best work.
If you have to periodically go through this process, and are forced to continuously worry about making your work legible and how others will judge it, that will substantially hurt your true productivity. At best it is a constant distraction. By default, it is a severe warping effect. A version of this phenomenon is doing huge damage to academic science.
As I noted in my AI updates, the reason this blog exists is that I received generous, essentially unconditional, anonymous support to ‘be a public intellectual’ and otherwise pursue whatever I think is best. My benefactors offer their opinions when we talk because I value their opinions, but they never try to influence my decisions, and I feel zero pressure to make my work legible in order to secure future funding.
As for funding my non-Balsa work further, I am totally fine for money, but I could definitely find ways to put a larger budget to work, and shows of support are excellent for morale.
If you have money to give, and you know individuals who should clearly be left to do whatever they think is best without worrying about raising money, then giving them unconditional grants is a great use of funds, including giving them ‘don’t worry about reasonable expenses’ levels of funding.
This is especially true when combined with ‘retrospective funding,’ based on what they have already done.
Less unconditionally, it is also great to fund specific actions and projects that you see not happening purely through lack of money, especially when no one is asking you for money.
Do Not Think Only On the Margin, and Also Use Decision Theory
Resist the temptation to think purely on the margin, asking only what one more dollar can do. The incentives get perverse quickly, as organizations are rewarded for putting their highest impact activities in peril. Organizations that can ‘run lean’ or protect their core activities get punished.
Also, you want to do some amount of retrospective funding. If people have done exceptional work in the past, you should be willing to give them a bunch more rope in the future, above and beyond the expected value of their new project.
Don’t make everyone constantly re-prove their cost-effectiveness each year, or at least give them a break. If someone has earned your trust, then if this is the project they want to do next, presume they did so because of reasons, although you are free to disagree with those reasons.
And the Nominees Are
Time to talk about the organizations themselves.
Rather than offer precise rankings, I divided by cause category and into three confidence levels.
High confidence means I have enough information to be confident the organization is at least a good pick.
Medium or low confidence means exactly that – I have less confidence that the choice is wise, and you should give more consideration to doing your own research.
Low confidence is still high praise, and very much a positive assessment! The majority of SFF applicants did not make the cut, and they had already undergone selection to get that far.
If an organization is not listed, that does not mean I think they would be a bad pick – they could have asked not to be included, or I could be unaware of them or their value. I know how Bayesian evidence works, but this post is not intended as a knock on anyone, in any way. Some organizations that are not here would doubtless have been included, if I’d had more time.
I try to give a sense of how much detailed investigation and verification I was able to complete, and what parts I have confidence in versus not. Again, my lack of confidence will often be purely about my lack of time to get that confidence.
Indeed, unless I already knew them from elsewhere, assume no organizations here got as much attention as they deserve before you decide on what for you is a large donation.
I’m tiering based on how I think about donations from you, from outside SFF.
I think the regranting organizations were clearly wrong choices from within SFF, but are reasonable picks if you don’t want to do extensive research, especially if you are giving small.
In terms of funding levels needed, I will similarly divide into three categories.
They roughly mean this, to the best of my knowledge:
Low: Could likely be fully funded with less than ~$250k.
Medium: Could plausibly be fully funded with between ~$250k and ~$2 million.
High: Could probably make good use of more than ~$2 million.
These numbers may be obsolete by the time you read this. If you’re giving a large amount relative to what they might need, you might want to check with the organization first.
A lot of organizations are scaling up rapidly, looking to spend far more money than they have in the past. Everyone seems eager to double their headcount. But I’m not putting people into the High category unless I am confident they can scalably absorb more funding (although some may have now already raised that funding, so again check on that to be sure).
The person I list as the leader of an organization will sometimes accidentally be whoever was in charge of fundraising rather than strictly the leader. Part of the reason for listing a name is to give context, so some of you can go ‘oh right, I know who that is.’ The other reason is that organization names are often highly confusing – adding the leader’s name gives you a safety check, so you can confirm that you are indeed pondering the same organization I am thinking of!
Organizations that Are Literally Me
This is my post, so I get to list Balsa Research first. (I make the rules here.)
If that’s not what you’re interested in, you can of course skip the section.
Balsa Research
Focus: Groundwork starting with studies to allow repeal of the Jones Act
Leader: Zvi Mowshowitz
Funding Needed: Low
Confidence Level: High
Our first target will be the Jones Act. We’re commissioning studies on its true costs, and the plan is to do more of them, and also do things like draft model repeals and explore ways to assemble a coalition and to sell and spread the results, to enable us to have a chance at repeal. Other planned cause areas include NEPA reform and federal housing policy (to build more housing where people want to live). We have one full time worker on the case.
I don’t intend to have it work on AI or assist with my other work, or to take personal compensation, unless I get donations that are dedicated to those purposes. The current fundraiser is to pay for academic studies on the Jones Act, full stop.
The pitch for Balsa, and the reason I am doing it, is in two parts.
I believe Jones Act repeal and many other abundance agenda items are neglected, tractable and important. The basic work that needs doing is not being done, it would be remarkably cheap to do a lot of it and do it well, and doing so would give us a real if unlikely chance at a huge win if circumstances break right.
I also believe that if people do not have hope for the future, do not have something to protect and fight for, or do not think good outcomes are possible, then they won’t care about protecting the future. And that would be very bad, because we are going to need to fight to protect our future if we want to have one, or have a good one.
You got to give them hope.
I could go on, but I’ll stop there.
Don’t Worry About the Vase
Focus: Zvi Mowshowitz writes a lot of words, really quite a lot.
Leader: Zvi Mowshowitz
Funding Needed: Strictly speaking none, but it all helps
Confidence Level: High
You can also of course always donate directly to my favorite charity.
By which I mean me. I always appreciate your support, however large or small.
Thanks to generous anonymous donors, I am able to write full time and mostly not worry about money. That is what makes this blog possible. I want to as always be 100% clear: I am totally, completely fine as is, as is the blog.
Please feel zero pressure here, as noted throughout there are many excellent donation opportunities out there.
Additional funds are still welcome. There are levels of funding beyond not worrying. Such additional support is always highly motivating, and also there are absolutely additional things I could throw money at to improve the blog, potentially including hiring various forms of help or even expanding to more of a full news operation or startup.
The easiest way to help (of course) is a Substack subscription or Patreon. If you want to go large then reach out to me.
Organizations Focusing On AI Non-Technical Research and Education
As a broad category, these are organizations trying to figure things out regarding AI existential risk, without centrally attempting to either do technical work or directly to influence policy and discourse.
The Scenario Project
Focus: AI forecasting research projects, governance research projects, and policy engagement, in that order.
Leader: Daniel Kokotajlo, with Eli Lifland
Funding Needed: Medium
Confidence Level: High
Of all the ‘shut up and take my money’ applications, even before I got to participate in their tabletop wargame exercise, I judged this the most ‘shut up and take my money’-est. At The Curve, I got to participate in the exercise and in the discussions around it, and I’m now even more confident this is an excellent pick.
I like it going forward, and it is a super strong case for retroactive funding as well. Daniel walked away from OpenAI, and what looked to be most of his net worth, to preserve his right to speak up.
That led to OpenAI finally allowing others to speak up as well. This is how he wants to speak up, and to try to influence what is to come, based on what he knows. I don’t know if it would have been my move, but the move makes a lot of sense. We need to back his play.
Lightcone Infrastructure
Focus: Rationality community infrastructure, LessWrong, AF and Lighthaven.
Leaders: Oliver Habryka, Raymond Arnold, Ben Pace
Funding Needed: High
Confidence Level: High
Disclaimer: I am on the CFAR board, and my writing appears on LessWrong and I have long time relationships with everyone involved, and have been to several great workshops or conferences at their campus at Lighthaven, so I was conflicted here.
I think they are doing great work and are worthy of support. There is a large force multiplier here (although that is true of a number of other organizations I list as well).
Certainly I think that if LessWrong, the Alignment Forum or the venue Lighthaven were unable to continue, especially LessWrong, that would be a major, quite bad unforced error, and I am excited by their proposed additional projects. The marginal costs here, while large (~$3 million per year), seem worthwhile to me, and far less than the fixed costs already paid.
Lightcone had been in a tricky spot for a while. It got sued by FTX, which made it very difficult to fundraise until the suit was settled, and the settlement itself cost a lot of money. OpenPhil is also unwilling to fund Lightcone, despite its recommenders finding Lightcone highly effective.
Now that the settlement is done, fundraising has to resume and the coffers need to be rebuilt.
Effective Institutions Project (EIP)
Focus: AI governance, advisory and research, finding how to change decision points
Leader: Ian David Moss
Funding Needed: Medium
Confidence Level: High
Can they indeed identify ways to target key decision points, and make a big difference? One can look at their track record. I’ve been asked to keep details confidential, but based on my assessment of private information, I confirmed they’ve scored some big wins and will plausibly continue to be able to have high leverage and punch above their funding weight. It seems important that they be able to continue their work.
Artificial Intelligence Policy Institute (AIPI)
Focus: Polls about AI
Leader: Daniel Colson
Funding Needed: Medium
Confidence Level: High
All those polls about how the public thinks about AI, including SB 1047? These are the people that did that. Without them, no one would be asking those questions. Ensuring that someone is asking is super helpful. With some earlier polls I was a bit worried that the wording was slanted, and that will always be a concern with a motivated pollster, but I think recent polls have been much better at this, and been reasonably close to neutral.
There are those who correctly point out that the public’s opinions are weakly held and low salience for now, and that all you’re often picking up is ‘the public does not like AI and it likes regulation.’ Fair enough, but someone still has to show this, show it applies here, and give the lie to people claiming the public goes the other way.
Psychosecurity Ethics at EURAIO
(Link goes to EURAIO, this is specifically about Psychosecurity Ethics)
Focus: Summits to discuss AI respecting civil liberties and not using psychological manipulation or eroding autonomy.
Leader: Neil Watson
Funding Needed: None Right Now
Confidence Level: High
Not everything needs to be focused on purely existential risk, and even though they don’t need funding right now they probably will in the future, so I wanted to mention Psychosecurity Ethics anyway. Plenty of other things can go wrong too, and few people are thinking about many of the potential failure modes. I was excited to help this get funded, as it seems like a super cheap, excellent way to put more focus on these questions, and provides something here for those skeptical of existential concerns.
Palisade Research
Focus: AI capabilities demonstrations to inform decision makers
Leader: Jeffrey Ladish
Funding Needed: Medium
Confidence Level: High
This is clearly an understudied approach. People need concrete demonstrations. Every time I talk with people in national security, or otherwise get closer to decision makers who aren’t deeply into AI and in particular AI safety concerns, I find that you need to be as concrete and specific as possible – that’s why I wrote Danger, AI Scientist, Danger the way I did. We keep getting rather on-the-nose fire alarms, but it would be better if we could get demonstrations even more on the nose, sooner, and in a more accessible way. I have confidence that Jeffrey is a good person to put this plan into action.
To donate, email donate@palisaderesearch.org.
AI Safety Info (Robert Miles)
Focus: Making YouTube videos about AI safety, starring Rob Miles
Leader: Rob Miles
Funding Needed: Low
Confidence Level: High
I think these are pretty great videos in general, and given what it costs to produce them we should absolutely be buying their production. If there is a catch, it is that I am very much not the target audience, so you should not rely too much on my judgment of what is and isn’t effective video communication on this front, and you should confirm you like the cost per view.
To donate, join his Patreon or contact him directly.
Intelligence Rising
Focus: Facilitation of the AI scenario planning game Intelligence Rising.
Leader: Caroline Jeanmaire
Funding Needed: Low
Confidence Level: High
I haven’t had the opportunity to play Intelligence Rising, but I have read the rules to it, and heard a number of excellent after action reports (AARs), and played Daniel Kokotajlo’s version. The game is clearly solid, and it would be good if they continue to offer this experience and if more decision makers play it.
To donate, reach out to team@intelligencerising.org.
Convergence Analysis
Focus: A series of sociotechnical reports on key AI scenarios, governance recommendations and conducting AI awareness efforts.
Leader: David Kristoffersson
Funding Needed: Medium (for funding their Scenario Planning only)
Confidence Level: Medium
They have three tracks.
I am not so interested in their Governance Research and AI Awareness tracks, where I believe there are many others doing similar work, some of whom seem like better bets.
Their Scenario Planning track is more exciting. It is not clear who else is doing this work, and having concrete scenarios to consider and point to, and differentiate between, seems highly valuable. If that interests you, I would check out their reports in this area, and see if you think they’re doing good work.
Their donation page is here.
Longview Philanthropy
Focus: Conferences and advice on x-risk for those giving >$1 million per year
Leader: Simran Dhaliwal
Funding Needed: Medium
Confidence Level: Low
They also do some amount of direct grantmaking, but are currently seeking funds for their conferences. The conferences involve top experts including Hinton and Bengio, and by several accounts they put on strong events. The obvious question is why, given attendees who give so much, this isn’t able to self-fund, and I am always nervous about giving money to those who focus on getting others to in turn give more money, as I discussed last time. I presume this does successfully act as a donation multiplier, if you are more comfortable than I am with that sort of strategy.
To inquire about donating, submit a query using the contact form at the bottom of their website.
Organizations Focusing Primarily On AI Policy and Diplomacy
Some of these organizations also look at bio policy or other factors, but I judge those here as being primarily concerned with AI.
In this area, I am especially keen to rely on people with good track records, who have shown that they can build and use connections and cause real movement. It’s so hard to tell what is and isn’t effective, otherwise. Often small groups can pack a big punch, if they know where to go, or big ones can be largely wasted – I think that most think tanks on most topics are mostly wasted even if you believe in their cause.
Center for AI Safety and the CAIS Action Fund
Focus: AI research, field building and advocacy
Leader: Dan Hendrycks
Funding Needed: High
Confidence Level: High
They played a key role in SB 1047 getting this far, they did the CAIS Statement on AI Risk, and in many other ways they’ve clearly been punching well above their weight in the advocacy space. The other arms are no slouch either, lots of great work here.
If you want to focus on their policy work, you can fund their 501(c)(4), the Action Fund, since 501(c)(3)s are limited in how much they can spend on political activities. Keep in mind the tax implications of that choice.
It would be pretty crazy if we didn’t give them the funding they need.
MIRI
Focus: At this point, primarily AI policy advocacy, plus some research
Leaders: Malo Bourgon, Eliezer Yudkowsky
Funding Needed: High
Confidence Level: High
MIRI, concluding that it is highly unlikely alignment will make progress rapidly enough otherwise, has shifted its strategy largely to communications and to advocating for major governments to come up with an international agreement to halt AI progress, although research still looks to be a large portion of the budget. It has also dissolved its agent foundations team. That is not a good sign for the world, but it does reflect their beliefs.
They have accomplished a lot, and I strongly believe they should be funded to continue to fight for a better future however they think is best, even when I disagree with their approach.
This is very much a case of ‘do this if and only if this aligns with your model and preferences.’
Foundation for American Innovation (FAI)
Focus: Tech policy research, thought leadership, educational outreach to government
Leader: Grace Meyer
Funding Needed: Medium
Confidence Level: High
FAI is centrally about innovation. I am all for innovation in most situations as well. Innovation is good, actually, as is building things and letting people do things. But in AI people calling for ‘supporting innovation’ are often using that as an argument against all regulation of AI, and indeed I am dismayed to see so many push so hard on this exactly in the one place I think they are deeply wrong – we could work together on it almost anywhere else.
Indeed, their Chief Economist and resident AI studier Samuel Hammond, who launched their AI safety advocacy efforts in April 2023, initially opposed SB 1047, moving after revisions to what I interpret as a neutral position. I famously had some strong disagreements with his 95 theses on AI, although I agreed far more than I disagreed, and I have many disagreements with AI and Leviathan as well.
Yet here they are rather high on the list. I have strong reasons to believe that we are closely aligned on key issues including compute governance, and private reasons to believe that FAI has been effective and we can expect that to continue, and its other initiatives also seem good. We don’t have to agree on everything else, so long as we all want good things and are trying to figure things out, and I’m confident that is the case here.
I am especially excited that they can speak to the Republican side of the aisle in the R’s native language, which is difficult for most in this space to do.
An obvious caveat is that if you are not interested in the non-AI pro-innovation part of the agenda (I certainly approve, but it’s not obviously a high funding priority for most readers), then you’ll want to ensure your donation goes where you want it.
Center for AI Policy (CAIP)
Focus: Lobbying Congress to adopt mandatory AI safety standards
Leader: Jason Green-Lowe
Funding Needed: Medium
Confidence Level: High
They’re a small organization starting out. Their biggest action so far has been creating a model AI governance bill, which I reviewed in depth. Other than too-low compute thresholds throughout, their proposal was essentially ‘the bill people are hallucinating when they talk about SB 1047, except very well written.’ I concluded it was a very thoughtful model bill, written to try and do a specific thing. Most of its choices made a lot of sense, and it is important work to have a bill like that already drafted and ready to go. There are a lot of futures where we don’t get a bill until some catastrophe forces the issue, and then we suddenly pass something in a hurry.
Encode Justice
Focus: Youth activism on AI safety issues
Leader: Sneha Revanur
Funding Needed: Medium
Confidence Level: High
They have done quite a lot on a shoestring budget by using volunteers, helping with SB 1047 and in several other places. Now they are looking to turn pro, and would like to not be on a shoestring. I think they have clearly earned that right. The caveat is risk of ideological capture. Youth organizations tend to turn to left wing causes.
The risk here is that this effectively turns mostly to AI ethics concerns. It’s great that they’re coming at this without having gone through the standard existential risk ecosystem, but that also heightens the ideological risk. I think it’s still worth it.
To donate, go here.
The Future Society
Focus: AI governance standards and policy.
Leader: Caroline Jeanmaire
Funding Needed: Medium
Confidence Level: High
I’ve seen credible sources saying they do good work, and that they substantially helped orient the EU AI Act to at least care at all about frontier general AI. The EU AI Act was not a good bill, but it could easily have been a far worse one, doing much to hurt AI development while providing almost nothing useful for safety. We should do our best to get some positive benefits out of the whole thing.
They’re also active around the world, including the USA and China.
Safer AI
Focus: Specifications for good AI safety, also directly impacting EU AI policy
Leader: Simeon Campos
Funding Needed: Medium
Confidence Level: High
I’ve known Simeon for a while. I am impressed. He knows his stuff, he speaks truth to power. He got good leverage during the EU AI Act negotiations, does a bunch of good invisible background stuff, and in this case I am in position to know about some of it. I definitely want to help him cook.
To donate, go here.
Institute for AI Policy and Strategy (IAPS)
Focus: Papers and projects for ‘serious’ government circles, meetings with same.
Leader: Peter Wildeford
Funding Needed: High
Confidence Level: Medium
I have a lot of respect for Peter Wildeford, and they’ve clearly put in good work and have solid connections, including on the Republican side where better coverage is badly needed. My verification of the degree of impact here (past and projected) is less definite than for some of the similar orgs in the High confidence tier, but they are clearly doing the thing, and this clearly crosses the ‘should be funded’ line in a sane world.
To donate, go here.
AI Standards Lab
Focus: Accelerating the writing of AI safety standards
Leaders: Ariel Gil and Jonathan Happel
Funding Needed: Medium
Confidence Level: Medium
They help facilitate the writing of AI safety standards for the EU, UK and USA. They have successfully gotten some of their work officially incorporated, and another recommender with a standards background was impressed by the work and team. This is one of the many things that someone has to do, and where stepping up to do what no one else is doing can go pretty great. Having now been involved in bill minutiae myself, I know it is thankless work, and that it can really matter.
To donate, reach out to inquiries@aistandardslab.org.
Safer AI Forum
Focus: International AI safety conferences
Leaders: Fynn Heide and Conor McGurk
Funding Needed: Medium
Confidence Level: Medium
They run the IDAIS series of conferences, including successful ones involving China. I do wish I had a better model of what makes such a conference actually matter versus not, but these sure seem like they should matter, and they certainly seem well worth the cost to run.
CLTR at Founders Pledge
Focus: UK Policy Think Tank focusing on ‘extreme AI risk and biorisk policy.’
Leader: Angus Mercer
Funding Needed: High
Confidence Level: Medium
The UK has shown promise in its willingness to shift its AI regulatory focus to frontier models in particular. It is hard to know how much of that shift to attribute to any particular source, or otherwise measure how much impact there has been or might be on final policy.
They have endorsements of their influence from, among others, Toby Ord, former Special Adviser to the UK Prime Minister Logan Graham, and Senior Policy Adviser Nitarshan Rajkumar.
I reached out to a source with experience in the UK government who I trust, and they reported back that they are a fan and pointed to some good things CLTR has helped with. There was a general consensus that they do good work, and those who investigated were impressed.
The concern is that their funding needs are high, and they are competing against many others in the policy space, many of which have very strong cases. But they seem clearly like a solid choice.
To donate, go here.
Pause AI and Pause AI Global
Focus: Advocating for a pause on AI, including via in-person protests
Leader: Holly Elmore
Funding Needed: Medium
Confidence Level: Medium
Some people say that those who believe we should pause AI would be better off staying quiet about it, rather than making everyone look foolish. Even though I very much do not think outright pausing AI is anything close to our first best policy at the moment, I think that those who believe we should pause AI should stand up and say we should pause AI. I very much appreciate people standing up, entering the arena and saying what they believe in, including quite often in my comments. Let the others mock all they want.
If you agree with Pause AI that the right move is to pause AI, then you should likely be excited to fund this. If you disagree, you have better options. But I’m happy that they are going for it.
Existential Risk Observatory
Focus: Get the word out and also organize conferences
Leader: Otto Barten
Funding Needed: Low
Confidence Level: Medium
Mostly this is the personal efforts of Otto Barten, ultimately advocating for a conditional pause. For modest amounts of money, he’s managed to have a hand in some high profile existential risk events and get the first x-risk related post into TIME magazine. It seems worthwhile to pay the modest amount to ensure he can keep doing what he is doing, in the way he thinks is best.
Simons Institute for Longterm Governance
Focus: Foundations and demand for international cooperation on AI governance and differential tech development
Leader: Konrad Seifert and Maxime Stauffer
Funding Needed: Medium
Confidence Level: Medium
As with all things diplomacy, hard to tell the difference between a lot of talk and things that are actually useful. Things often look the same either way for a long time. A lot of their focus is on the UN, so update either way based on how useful you think that approach is. They are doing a lot of attempted Global South coordination on this.
Legal Advocacy for Safe Science and Technology
Focus: Legal team for lawsuits on catastrophic risk and to defend whistleblowers.
Leader: Tyler Whitmer
Funding Needed: Medium
Confidence Level: Low
I wasn’t sure where to put them, but I suppose lawsuits are kind of policy by other means in this context, or close enough? I buy the core idea, which is that having a legal team on standby for catastrophic risk related legal action in case things get real quickly is a good idea, and I haven’t heard anyone else propose this, although I do not feel qualified to vet the operation.
While they are open to accepting donations, they’re not yet set up to take a ton of smaller ones. Donors interested in making relatively substantial donations or grants should contact info@lasst.org.
Organizations Doing ML Alignment Research
This category should be self-explanatory. Unfortunately, a lot of good alignment work still requires charitable funding. The good news is that there is a lot more funding, and willingness to fund, than there used to be, and also the projects generally look more promising.
The great thing about interpretability is that you can be confident you are dealing with something real. The not as great thing is that this can draw too much attention to interpretability, and that you can fool yourself into thinking that All You Need is Interpretability.
The good news is that several solid places can clearly take large checks.
I didn’t investigate too deeply on top of my existing knowledge here, because at SFF I had limited funds and decided that direct research support wasn’t a high enough priority, partly due to it being sufficiently legible. We should be able to find money on the sidelines eager to take these opportunities.
Model Evaluation and Threat Research (METR)
Formerly ARC Evaluations.
Focus: Model evaluations
Leaders: Beth Barnes, Emma Abele, Chris Painter, Kit Harris
Funding Needed: None Whatsoever
Confidence Level: High
Originally I wrote that we hoped to be able to get large funding for METR via non-traditional sources. That has happened – METR got major funding recently. That’s great news. It also means there is no plausible ‘funding gap’ here for now.
If it ever does need funding again, METR has proven to be the gold standard for outside evaluations of potentially dangerous frontier model capabilities. We very much need these outside evaluations, and to give the labs every reason to use them and no excuse not to use them. In an ideal world the labs would be fully funding METR, but they’re not. So this becomes a place where we can confidently invest quite a bit of capital, make a legible case for why it is a good idea, and know it will probably be well spent.
Alignment Research Center (ARC)
Focus: Theoretically motivated alignment work
Leader: Jacob Hilton
Funding Needed: High
Confidence Level: High
There’s a long track record of good work here, and Paul Christiano remains excited. If you are looking to fund straight up alignment work and don’t have a particular person or small group in mind, this is certainly a safe bet to put additional funds to good use and attract good talent.
Apollo Research
Focus: Evaluations, especially versus deception, some interpretability and governance.
Leader: Marius Hobbhahn
Funding Needed: High
Confidence Level: High
This is an excellent thing to focus on, and one of the places we are most likely to be able to show ‘fire alarms’ that make people sit up and notice. Their first year seems to have gone well; one example is their presentation at the UK safety summit showing that LLMs can strategically deceive their primary users when put under pressure. They will need serious funding to fully do the job in front of them. Hopefully, like METR, they can be helped by the task being highly legible.
To donate, reach out to info@apolloresearch.ai.
Cybersecurity Lab at University of Louisville
Focus: Allow Roman Yampolskiy to continue his research and pursue a PhD
Leader: Roman Yampolskiy
Funding Needed: Low
Confidence Level: High
If this still hasn’t happened by the time you read this, and there is danger he won’t be able to do the PhD, then obviously someone should fix that. His appearance on Lex Fridman’s podcast was a great way to widen the audience, and it is clear he says what he believes and is pursuing what he thinks might actually help. He is the doomiest of doomers, and I’m glad he is not holding back on that, even if I disagree with the assessment and think it’s not the ideal look. Because the ideal is to say what you think.
Timaeus
Focus: Interpretability research
Leader: Jesse Hoogland
Funding Needed: Medium
Confidence Level: High
Timaeus focuses on interpretability work and sharing their results. The set of advisors is excellent, including Davidad and Evan Hubinger. Evan, John Wentworth and Vanessa Kosoy have offered high praise, and there is evidence they have impacted top lab research agendas. They’ve done what I think is solid work, although I am not so great at evaluating papers directly. If you’re interested in directly funding interpretability research, all of that makes this seem like a slam dunk.
To donate, get in touch with Jesse at jesse@timaeus.co. If this is the sort of work that you’re interested in doing, they also have a discord at http://devinterp.com/discord.
Simplex
Focus: Mechanistic interpretability of how inference breaks down
Leaders: Paul Riechers and Adam Shai
Funding Needed: Medium
Confidence Level: High
I am not as high on them as I am on Timaeus, but they have given reliable indicators that they will do good interpretability work. I’d feel comfortable backing them.
Far AI
Focus: Interpretability and other alignment research, incubator, hits based approach
Leader: Adam Gleave
Funding Needed: High
Confidence Level: Medium
Hits-based is the right approach to research. I’ve gotten confirmation that they’re doing the real thing here. In an ideal world everyone doing the real thing would get supported. But my verification is secondhand.
Alignment in Complex Systems Research Group
Focus: AI alignment research on hierarchical agents and multi-system interactions
Leader: Jan Kulveit
Funding Needed: Medium
Confidence Level: Medium
I like the focus here on agents and their interactions, and from what I saw I think he is generally thinking well. If one wants to investigate further, he has an AXRP podcast episode, which I haven’t listened to.
To donate, reach out to hello@epistea.org, and note that you are interested in donating to ACS specifically.
Apart Research
Focus: AI safety hackathons
Leaders: Esben Kran, Jason Schreiber
Funding Needed: Medium
Confidence Level: Low
I’m confident in their execution of the idea. My doubt here is on the level of ‘is AI safety something that benefits from hackathons.’ Is this something one can, as it were, hack together usefully? Are the hackathons doing good counterfactual work? Or is this a way to flood the zone with more variations on the same ideas? As with many orgs on the list, this one makes sense if and only if you buy the business model.
Transluce
Focus: Interpretability, tools for AI control, and so forth. New org.
Leaders: Jacob Steinhardt, Sarah Schwettmann
Funding Needed: High
Confidence Level: Low
This would be a new organization. I have confirmation the team is credible. The plan is highly ambitious, with planned scale well beyond what SFF could have funded. I haven’t done anything like the investigation into their plans and capabilities you would need before placing a bet that big, as AI research of all kinds gets expensive quickly. If there is sufficient appetite to scale the amount of privately funded direct work of this type, then this seems like a fine place to look.
To donate, reach out to info@transluce.org.
Atlas Computing
Focus: Guaranteed safe AI
Leader: Evan Miyazono
Funding Needed: Medium
Confidence Level: Low
My hesitancy here is my hesitancy regarding the technical approach. I still can’t see how the guaranteed safe AI plan can work. I’m all for trying it; it is clearly something people should try, given how many very smart people find promise in it. I sure hope I’m wrong and the approach is viable. If you find it promising, this looks much better.
They receive donations from here, or you can email them at hello@atlascomputing.org.
Organizations Doing Math, Decision Theory and Agent Foundations
Right now it looks likely that AGI will be based around large language models (LLMs). That doesn’t mean this is inevitable. I would like our chances better if we could base our ultimate AIs around a different architecture, one that was more compatible with being able to get it to do what we would like it to do.
One path for this is agent foundations, which involves solving math to make the programs work instead of relying on inscrutable giant matrices.
Even if we do not manage that, decision theory and game theory are potentially important for navigating the critical period in front of us, for life in general, and for figuring out what the post-transformation AI world might look like, and thus what the choices we make now might do to impact that.
There are not that many people working on these problems. Actual Progress would be super valuable. So even if we expect the median outcome does not involve enough progress to matter, I think it’s still worth taking a shot.
The flip side is you worry about people ‘doing decision theory into the void’ where no one reads their papers or changes their actions. That’s a real issue. As is the increased urgency of other options. Still, I think these efforts are worth supporting, in general.
Orthogonal
Focus: AI alignment via agent foundations
Leader: Tamsin Leake
Funding Needed: Medium
Confidence Level: High
I have funded Orthogonal in the past. They are definitely doing the kind of work that, if it succeeded, might actually amount to something, and would help us get through this to a future world we care about. It’s a long shot, but a long shot worth trying. My sources are not as enthusiastic as they once were, but there are only a handful of groups trying that have any chance at all, and this still seems like one of them.
Topos Institute
Focus: Math for AI alignment
Leaders: Brendan Fong and David Spivak.
Funding Needed: Medium
Confidence Level: High
Topos is essentially Doing Math to try and figure out what to do about AI and AI Alignment. I’m very confident that they are qualified to (and actually will) turn donated money (partly via coffee) into math, in ways that might help a lot. I am also confident that the world should allow them to attempt this.
Ultimately it all likely amounts to nothing, but the upside potential is high and the downside seems very low. I’ve helped fund them in the past and am happy about that.
To donate, go here.
Eisenstat Research
Focus: Two people doing research at MIRI, in particular Sam Eisenstat
Leader: Sam Eisenstat
Funding Needed: Medium
Confidence Level: High
Given Sam Eisenstat’s previous work, it seems worth continuing to support him, including an additional researcher of his choice. I still believe this stuff is worth working on; obviously only support it if you do as well.
To donate, contact sam@intelligence.org.
ALTER (Affiliate Learning-Theoretic Employment and Resources) Project
Focus: This research agenda, with this status update, examining intelligence
Leader: Vanessa Kosoy
Funding Needed: Medium
Confidence Level: High
This is Vanessa Kosoy and Alex Appel, who have another research agenda formerly funded by MIRI that now needs to stand on its own after MIRI’s refocus. I once again believe this work to be worth continuing even if the progress isn’t what one might hope. I wish I had the kind of time it takes to actually dive into these sorts of theoretical questions, but alas I do not, or at least I’ve made a triage decision not to.
Mathematical Metaphysics Institute
(Link goes to a Google doc with more information, no website yet.)
Focus: Searching for a mathematical basis for metaethics.
Leader: Alex Zhu
Funding Needed: Low
Confidence Level: Low
Alex Zhu has run iterations of the Math & Metaphysics Symposia, which had some excellent people in attendance, and he intends partly to do more things of that nature. He thinks Eastern philosophy contains much wisdom relevant to developing a future ‘decision-theoretic basis of metaethics’ and plans on an 8+ year project to do that.
I’ve seen plenty of signs that the whole thing is rather bonkers, but also strong endorsements from a bunch of people I trust that there is good stuff here, and that it is the kind of crazy that is sometimes crazy enough to work. So there’s a lot of upside. If you think this kind of approach has a chance of working, this could be very exciting.
To donate, message Alex at zhukeepa@gmail.com.
Focal at CMU
Focus: Game theory for cooperation by autonomous AI agents
Leader: Vincent Conitzer
Funding Needed: Medium
Confidence Level: Low
This is an area MIRI and the old rationalist crowd thought about a lot back in the day. There are a lot of ways for advanced intelligences to cooperate that are not available to humans, especially if they are capable of doing things in the class of sharing source code or can show their decisions are correlated with each other. With sufficient capability, any group of agents should be able to act as if it is a single agent, and we shouldn’t need to do the game theory for them in advance either. I think it’s good things to be considering, but one should worry that even if they do find answers it will be ‘into the void’ and not accomplish anything. Based on my technical analysis I wasn’t convinced Focal was going to sufficiently interesting places with it, but I’m not at all confident in that assessment.
To donate, reach out to Vincent directly at conitzer@cs.cmu.edu to be guided through the donation process.
Organizations Doing Cool Other Stuff Including Tech
This section is the most fun. You get unique projects taking big swings.
MSEP Project at Science and Technology Futures (Their Website)
Focus: Drexlerian Nanotechnology
Leader: Eric Drexler, of course
Funding Needed: Medium
Confidence Level: High
Yes, it’s Eric Drexler looking for funding to better understand nanotechnology, including by illustrating it via games. This seems like a clear case of ‘shut up and take my money.’ The catch is that he wants to open source the tech, and there are some obvious reasons why open sourcing nanotechnology might not be a wonderful idea? This is another case of it being fine for now, with perhaps a time in the future when it would need to stop, which should be obvious if it arrives. Given that, and how brilliant Drexler is, and how much we need to get lucky somewhere, I’m very willing to gamble.
To donate, reach out to info@scienceandtechnologyfutures.org.
ALLFED
Focus: Feeding people with resilient foods after a potential nuclear war
Leader: David Denkenberger
Funding Needed: High
Confidence Level: Medium
As far as I know, no one else is doing the work ALLFED is doing. A resilient food supply ready to go in the wake of a nuclear war could be everything. There’s a small but real chance that the impact is enormous. In my 2021 SFF round, I went back and forth with them several times over various issues, ultimately funding them; you can read about those details here.
I think all of the concerns and unknowns from last time essentially still hold, as does the upside case. I decided I wasn’t going to learn more without a major time investment, and that I didn’t have the ability to do that investment.
If you are convinced by the viability of the tech and ability to execute, then there’s a strong case that this is a very good use of funds, especially if you are an ‘AI skeptic’ and also if your model of AI political dynamics includes a large chance of nuclear war.
Research and investigation on the technical details seems valuable here. If we do have a viable path to alternative foods and don’t fund it, that’s a pretty large miss, and I find it highly plausible that this could be super doable and yet not otherwise done.
Good Ancestor Foundation
Focus: Collaborations for tools to increase civilizational robustness to catastrophes
Leader: Colby Thompson
Funding Needed: High
Confidence Level: High
The principle of ‘a little preparation now can make a huge difference to resilience and robustness in a disaster later, so it’s worth doing even if the disaster is not so likely’ generalizes. Thus, the Good Ancestor Foundation, targeting nuclear war, solar flares, internet and cyber outages, and some AI scenarios and safety work.
A particular focus is archiving data and tools, enhancing synchronization systems and designing a novel emergency satellite system (first one goes up in June) to help with coordination in the face of disasters. They’re also coordinating on hardening critical infrastructure and addressing geopolitical and human rights concerns. They’ve also given out millions in regrants.
One way I know they make good decisions is they help facilitate the funding for my work. They have my sincerest thanks. Which also means there is a conflict of interest, so take that into account.
Charter Cities Institute
Focus: Building charter cities
Leader: Kurtis Lockhart
Funding Needed: Medium
Confidence Level: Medium
I do love charter cities. There is little question they are attempting to do a very good thing and are sincerely going to attempt to build a charter city in Africa, where such things are badly needed. Very much another case of it being great that someone is attempting to do this. Seems like a great place for people who don’t think transformational AI is on its way but do understand the value here.
German Primate Center (DPZ) – Leibniz Institute for Primate Research
Focus (of this proposal): Creating primates from cultured, edited stem cells
Leaders: Sergiy Velychko and Rudiger Behr
Funding Needed: High
Confidence Level: Low
The Primate Center is much bigger than any one project, but this project was intriguing – if you are donating because of this project, you’ll want to make sure the money goes for this specific project. The theory says that you should be able to create an embryo directly from stem cells, including from any combination of genders, with the possibility of editing their genes. If it worked, this could be used for infertility, allowing any couple to have a child, and potentially the selection involved could be used for everything from improved health to intelligence enhancement. The proposed project in particular is to do this in primates.
I can’t speak directly to verify the science, there are those who think any existential risk considerations would probably arrive too late to matter, and of course you will want to consider any potential ethical concerns. But if you see a substantial chance it works and think of this purely as the ultimate infertility treatment, that is already amazing value.
Carbon Copies for Independent Minds
Focus: Whole brain emulation
Leader: Randal Koene
Funding Needed: Medium
Confidence Level: Low
At this point, if it worked in time to matter, I would be willing to roll the dice on emulations. What I don’t have is much belief that it will work, or the time to do a detailed investigation into the science. So flagging here, because if you look into the science and you think there is a decent chance, this becomes a good thing to fund.
Organizations Focused Primarily on Bio Risk
Secure DNA
Focus: Scanning DNA synthesis for potential hazards
Leaders: Kevin Esvelt, Andrew Yao and Raphael Egger
Funding Needed: Medium
Confidence Level: Medium
It is certainly an excellent idea. Give everyone fast, free, cryptographically secure screening of potential DNA synthesis, to ensure no one is trying to create something we do not want anyone to create. AI only makes this concern more urgent. I didn’t have time to investigate and confirm this is the real deal, as I had other priorities even if it was, but certainly someone should be doing this.
There is also another related effort, Secure Bio, if you want to go all out. I would fund Secure DNA first.
To donate, contact them here.
Blueprint Biosecurity
Focus: Increasing capability to respond to future pandemics, Next-gen PPE, Far-UVC.
Leader: Jake Swett
Funding Needed: Medium
Confidence Level: Medium
There is no question we should be spending vastly more on pandemic preparedness, including far more on developing and stockpiling superior PPE and on Far-UVC. It is rather shameful that we are not doing that, and Blueprint Biosecurity plausibly can move substantial additional investment there. I’m definitely all for that.
To donate, reach out to donations@blueprintbiosecurity.org or head to the Blueprint Bio PayPal Giving Fund.
Pour Domain
Focus: AI enabled biorisks, among other things.
Leader: Patrick Stadler
Funding Needed: Low
Confidence Level: Low
Everything individually looks worthwhile but also rather scattershot. Then again, who am I to complain about a campaign for e.g. improved air quality? My worry is still that this is a small operation trying to do far too much, some of which I wouldn’t rank too high as a priority, and that it needs more focus, on top of not having that clear big win yet.
Donation details are at the very bottom of this page.
Organizations That then Regrant to Fund Other Organizations
There were lots of great opportunities in SFF this round. I had an embarrassment of riches I was excited to fund.
Thus I decided quickly that I would not be funding any regranting organizations. If you are in the business of taking in money and then shipping it out to worthy causes, well, I could ship directly to highly worthy causes, so there was no need to have someone else do the job again, or reason to expect them to do better.
That does not mean that others should not consider such donations.
I see two important advantages to this path.
Regranters can offer smaller grants that are well-targeted.
Regranters save you a lot of time.
Thus, if you are making a ‘low effort’ donation, and you trust that others who share your values will invest more effort, it makes more sense to consider regranters.
SFF Itself (!)
Focus: Give out grants based on recommenders, primarily to 501c(3) organizations
Leaders: Andrew Critch and Jaan Tallinn
Funding Needed: High
Confidence Level: High
If I had to choose a regranter right now to get a large amount of funding, my pick would be to give it to the SFF process. The applicants and recommenders are already putting in their effort, and it is very clear there are plenty of exciting places to put additional funds. The downside is that SFF can’t ‘go small’ efficiently on either end, so it isn’t good at getting small amounts of funding to individuals. If you’re looking to do that in particular, and can’t do it directly, you’ll need to look at other options.
Due to their operational scale, SFF is best suited only for larger donations.
Manifund
Focus: Regranters to AI safety, existential risk, EA meta projects, creative mechanisms
Leader: Austin Chen (austin at manifund.org).
Funding Needed: Medium
Confidence Level: Medium
This is a regranter that gives its money to its own regranters, one of whom was me, for unrestricted grants. They’re the charity donation offshoot of Manifold. They’ve played with crowdfunding, with impact certificates, and with ACX Grants. They help run Manifest.
You’re essentially hiring these people to keep building a website and trying alternative funding allocation mechanisms, and for them to trust the judgment of selected regranters. That seems like a reasonable thing to do if you don’t otherwise know where to put your funds and want to fall back on a wisdom of crowds of sorts. Or, perhaps, if you actively want to fund the cool website.
Manifold itself did not apply, but I would think that would also be a good place to invest or donate in order to improve the world. It wouldn’t even be crazy to go around subsidizing various markets. If you send me mana there, I will set it aside and use it to subsidize markets when that seems like the right thing to do.
If you want to support Manifold itself, you can donate or buy a SAFE; contact Austin.
Also I’m a regranter at Manifund, so if you wanted to, you could use that to entrust me with funds to regrant. As you can see I certainly feel I have plenty of good options here if I can’t find a better local one, and if it’s a substantial amount I’m open to general directions (e.g. ensuring it happens relatively quickly, or a particular cause area as long as I think it’s net positive, or the method of action or theory of impact).
AI Risk Mitigation Fund
Focus: Spinoff of LTFF, grants for AI safety projects
Leader: Thomas Larsen
Funding Needed: Medium
Confidence Level: Medium
Seems very straightforwardly exactly what it looks like: a standard grantmaker, usually making grants in the low six figure range. Fellow recommenders were high on Larsen’s ability to judge projects. If you think this is better than you can do on your own and you want to fund such projects, then sure, go for it.
Long Term Future Fund
Focus: Grants of 4-6 figures mostly to individuals, mostly for AI existential risk
Leader: Caleb Parikh (among other fund managers)
Funding Needed: High
Confidence Level: Low
The pitch on LTFF is that it is a place for existential risk people who need modest cash infusions to ask for them, and to get them without too much overhead or distortion. Looking over the list of grants, there is at least a decent hit rate. One question is, are the marginal grants a lot less effective than the average grant?
My worry is that I don’t know the extent to which the process is accurate, fair, favors insiders, extracts a time or psychic tax on participants, or rewards ‘being in the EA ecosystem,’ or especially the extent to which the net effects are distortionary and biased towards legibility and standardized efforts. Or the extent to which people use the system to extract funds without actually doing anything.
That’s not a ‘I think the situation is bad,’ it is a true ‘I do not know.’ I doubt they know either.
What do we know? They say applications should take 1-2 hours to write and between 10 minutes and 10 hours to evaluate, although that does not include time spent forming the plan, and this is anticipated to be a roughly yearly process long term. And I don’t love that this concern is not listed under reasons not to choose to donate to the fund (although the existence of that list at all is most welcome, and the reasons to donate don’t consider the flip side either).
Foresight
Focus: Regrants, fellowships and events
Leader: Allison Duettmann
Funding Needed: Medium
Confidence Level: Low
Foresight also does other things. The focus here was their AI existential risk grants, which they offer on a rolling basis. I’ve advised them on a small number of potential grants, but they haven’t asked often as of yet. The advantage on the regrant side would be to get outreach that wasn’t locked too tightly into the standard ecosystem. The other Foresight activities all seem clearly like good things, but the bar these days is high and since they weren’t the topic of the application I didn’t investigate. They’ve invited me to an event, but I haven’t been able to find time to go.
Centre for Enabling Effective Altruism Learning & Research (CEELAR)
Focus: The Athena Hotel (aka the EA Hotel), a catered host for EAs in the UK
Leader: Greg Colbourn
Funding Needed: Medium
Confidence Level: Low
I love the concept of a ‘catered hotel’ where select people can go to be supported in whatever efforts seem worthwhile. If you are looking to support a very strongly EA-branded version of that, which I admit that I am not, then here you go.
Organizations That are Essentially Talent Funnels
I am broadly skeptical of prioritizing AI safety talent funnels at this time.
The reason is simple. If we have so many good organizations already, in need of so much funding, why do we need more talent funnels? Is talent our limiting factor? Are we actually in danger of losing important talent?
The situation was very different last time. We had more funding than I felt we had excellent places to put it. Indeed, I solicited and then gave a grant to Emergent Ventures India. That’s a great way to promote development of talent in general, the grants are very small and have large impacts, and Tyler Cowen is an excellent evaluator and encourager of and magnet for talent.
Now I look at all the organizations here, and I don’t see a shortage of good talent. If anything, I see a shortage of ability to put that talent to good use.
The exception is leadership and management. There remains, it appears, a clear shortage of leadership and management talent across all charitable space, and startup space, and probably flat out all of space.
Which means if you are considering stepping up and doing leadership and management, then that is likely more impactful than you might at first think.
If there was a strong talent funnel specifically for leadership or management, that would be a very interesting funding opportunity. And yes, of course there still need to be some talent funnels. Right now, my guess is we have enough, and marginal effort is best spent elsewhere.
But also high returns from developing good talent are common, so disagreement here is reasonable. This is especially true if people can be placed ‘outside the ecosystem’ where they won’t have to compete with all the usual suspects for their future funding. If you can place them into government, that’s even better. To the extent that is true, it makes me more excited.
AI Safety Camp
Focus: Learning by doing, participants work on a concrete project in the field
Leaders: Remmelt Ellen and Linda Linsefors
Funding Needed: Low
Confidence Level: High
By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back something in this section, I’d start here.
Donors can reach out to Remmelt at remmelt@aisafety.camp, or leave a donation at Manifund to help cover stipends.
Center for Law and AI Risk
Focus: Paying academics small stipends to move into AI safety work
Leaders: Peter Salib (psalib @ central.uh.edu), Yonathan Arbel (yarbel @ law.ua.edu) and Kevin Frazier (kfrazier2 @ stu.edu).
Funding Needed: Low
Confidence Level: High
This strategy is potentially super efficient. You have an academic that is mostly funded anyway, and they respond to remarkably small incentives to do something they are already curious about doing. Then maybe they keep going, again with academic funding. If you’re going to do ‘field building’ and talent funnel in a world short on funds for those people, this is doubly efficient. I like it.
To donate, message one of the leaders at the emails listed above.
Speculative Technologies
Focus: Fellowships for Drexlerian functional nanomachines, high-throughput tools and discovering new superconductors
Leader: Benjamin Reinhardt
Funding Needed: Medium
Confidence Level: Medium
My note to myself, ‘is it unfair to say we should first fund literal Eric Drexler?’ (who is also seeking funding), is indeed a tad unfair, but it also illustrates how tough it is out there looking for funding. I have confirmation that Reinhardt knows his stuff, and we certainly could use more people attempting to build revolutionary hardware. If the AI is scary enough to make you not want to build the hardware, it would figure out how to build the hardware anyway, so you might as well find out now.
So if you’re looking to fund a talent funnel, this seems like a good choice.
To donate, go here.
Talos Network
Focus: Fellowships to other organizations, such as Future Society, Safer AI and FLI.
Leader: Cillian Crosson (same as Tarbell for now but she plans to focus on Tarbell)
Funding Needed: Medium
Confidence Level: Medium
They run two fellowship cohorts a year. They seem to place people into a variety of solid organizations, and are exploring the ability to get people into various international organizations like the OECD, the UN, the European Commission or the EU AI Office. The more I am convinced people will actually get inside meaningful government posts, the more excited I will be.
To donate, contact team@talosnetwork.org.
MATS Research
Focus: Researcher mentorship for those new to AI safety.
Leaders: Ryan Kidd and Christian Smith.
Funding Needed: High
Confidence Level: Medium
MATS is by all accounts very good at what they do and they have good positive spillover effects on the surrounding ecosystem. If (and only if) you think that what they do, which is support would-be alignment researchers starting out, is what you want to fund, then you should absolutely fund them. That’s a question of prioritization.
Epistea
Focus: X-risk residencies, workshops, coworking in Prague, fiscal sponsorships
Leader: Irena Kotikova
Funding Needed: Medium
Confidence Level: Medium
I see essentially two distinct things here.
First, you have the umbrella organization, offering fiscal sponsorship for other organizations. Based on what I know from the charity space, this is a highly valuable service – it was very annoying getting Balsa a fiscal sponsor, even though we ultimately found a very good one that did us a solid, and also annoying figuring out how to be on our own going forward.
Second, you have various projects around Prague, which seem like solid offerings in that class of action of building up EA-style x-risk actions in the area, if that is what you are looking for. So you’d be supporting some mix of those two things.
To donate, contact hello@epistea.org.
Emergent Ventures (Special Bonus Organization, was not part of SFF)
Focus: Small grants to individuals to help them develop their talent
Leader: Tyler Cowen
Funding Needed: Medium
Confidence Level: High
I’m listing this at the end of the section as a bonus entry. They are not like the other talent funnels in several important ways.
It’s not about AI Safety. You can definitely apply for an AI Safety purpose, and he’s granted such applications in the past, but topics run across the board, well beyond the range otherwise described in this post.
Decisions are quick and don’t require paperwork or looking legible. Tyler Cowen makes the decision, and there’s no reason to spend much time on your end either.
There isn’t a particular cause area this is trying to advance, and he’s not trying to steer people to do any particular thing. Just to be more ambitious, and be able to get off the ground and build connections and so on. It’s not prescriptive.
I strongly believe this is an excellent way to boost the development of more talent, as long as money is serving as a limiting factor on the project, and that it is great to develop talent even if you don’t get to direct or know where it is heading. Sure, I get into rhetorical arguments with Tyler all the time, around AI and also other things, and we disagree strongly about some of the most important questions where I don’t understand how he can continue to have the views he does, but this here is still a great project, an amazingly cost-efficient intervention.
AI Safety Cape Town
Focus: AI safety community building and research in South Africa
Leaders: Leo Hyams and Benjamin Sturgeon
Funding Needed: Low
Confidence Level: Low
This is a mix of AI research and building up the local AI safety community. One person whose opinion I value gave the plan and those involved in it a strong endorsement, so I am including it based on that.
To donate, reach out to leo@aisafetyct.com.
Impact Academy Limited
Focus: Incubation, fellowship and training in India for technical AI safety
Leader: Sebastian Schmidt
Funding Needed: Medium
Confidence Level: Low
I buy the core idea that India is a place to get good leverage on a lot of underserved talent, that is not going to otherwise get exposure to AI safety ideas and potentially not get other good opportunities either, all on the cheap. So this makes a lot of sense.
To donate, contact info@impactacademy.org.
Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS)
Focus: Fellowships and affiliate programs for new alignment researchers
Leaders: Nora Ammann, Lucas Teixeira and Dusan D. Nesic
Funding Needed: High
Confidence Level: Low
The same logic applies here as applies to the other talent funnels. It seems like a solid talent funnel when I look at who they sent through it, and one other recommender thought their approach was strong, but do we need more of this? If you think we straightforwardly need more help with people starting out in alignment work, then this is a solid place to look.
To donate, reach out to contact@pibbss.ai.
Tarbell Fellowship at PPF
Focus: Journalism fellowships for oversight of AI companies.
Leader: Cillian Crosson (same as Talos Network for now but she plans to focus here)
Funding Needed: Medium
Confidence Level: Low
They offer fellowships to would-be journalists so they can go out and provide ‘democratic oversight of AI.’ They have sponsored at least one person who went on to do good work in the area.
I am not sure this is a place where we need to invest more, or whether people trying to do this even need fellowships. Hard to say. There’s certainly a lot more tech reporting, and more every day; if I’m ever short of material I have no trouble finding more. It is still a small amount of money per person that can meaningfully help people get on their feet and do something useful. We do in general need better journalism.
Catalyze Impact
Focus: Incubation of AI safety organizations
Leader: Alexandra Bos
Funding Needed: Low
Confidence Level: Low
Why funnel individual talent when you can incubate entire organizations? I am not convinced that on the margin we currently need more of either, but I’m more receptive to the idea of an incubator. Certainly incubators can be high leverage points for getting valuable new orgs and companies off the ground, especially if your model is that once the org becomes fundable it can unlock additional funding. And the price is right, so this could be worth a shot even if we’re somewhat saturated on orgs already, to try and get better ones. If you think an incubator is worth funding, then the question is whether this is the right team. The application was solid all around, but beyond that I don’t have a differentiator on why this is the team.
Arkose
Focus: Various field building activities in AI safety
Leader: Victoria Brook
Funding Needed: Medium
Confidence Level: Low
Kind of an 80,000 Hours variant. They also help find funding and compute and make connections, and offer 30 minute phone calls. Their job board seems like a useful thing and passes at least some sanity checks on what not to list; I’ve referenced it before in the newsletter. Highly plausible choice if that fits your investment thesis.
To donate, reach out to team@arkose.org.
CeSIA within EffiSciences
Focus: New AI safety org in Paris, discourse, R&D collaborations, talent pipeline
Leaders: Charbel-Raphael Segerie and Florent Berthet
Funding Needed: Medium
Confidence Level: Low
They’re doing all three of discourse, direct work and talent funnels. I put them in the talent section based on my read of where they place their biggest emphasis and have the best case for existing impact. I see enough social proof of them doing the things that I’m happy to list them, in case people are excited to back a new org of this type.
To donate, go here.
Stanford Existential Risk Initiative (SERI)
Focus: Recruitment for existential risk causes
Leaders: Steve Luby and Paul Edwards
Funding Needed: Medium
Confidence Level: Low
Stanford students certainly are one group of people worth educating about existential risk. It’s also an expensive place to be doing it, and a place that shouldn’t need extra funding. And that hates fun. And it’s not great that AI is listed third in their existential risk definition. So I’m not high on them, but it sure beats giving unrestricted funds to your alma mater.
Interested donors should contact Steve Luby directly at sluby@stanford.edu.
And that’s a wrap!
If an organization was not included here, again, that does not mean they aren’t good, or even that I wouldn’t endorse them if asked. It could be because they didn’t apply to SFF, or because I didn’t give them the time and attention they need, or in several cases because I wrote up a section for them but they asked to be excluded – if by accident I included you and you didn’t want to be included and I failed to remove you, or you don’t like the quote here, I sincerely apologize and will edit you out right away, no questions asked.
If an organization is included here, that is a good thing, but again, it does not mean you should donate without checking if it makes sense based on what you think is true, how you think the world works, what you value and what your priorities are. There are no universal right answers.
Thanks a lot for the post! It’s really useful to have so many charities and a bit of context in the same place when thinking about my own donations. I found it hard to navigate a post with so many charities, so I put this into a spreadsheet that lets me sort and filter the categories—hopefully this is useful to others too! https://docs.google.com/spreadsheets/d/1WN3uaQYJefV4STPvhXautFy_cllqRENFHJ0Voll5RWA/edit?gid=0#gid=0
Note that if you want to investigate further but would rather read a transcript than watch a video, AXRP has you covered.
Tarbell Fellowship at PPF
I think you’ve massively underrated this. My impression is that Tarbell has had significant effect on the general AI discourse, by allowing a number of articles to be written in mainstream outlets.
Did you look into: https://longtermrisk.org/?
Great post! One correction:
AI Safety Info is a project owned by Rob Miles which mostly works on an extensive FAQ (300+ articles, covering how to help, common objections and responses, introduction, and more in-depth material+resources like the memes wiki), as well as some side projects like maintaining the Alignment Research Dataset and the RAG chatbot. While AI Safety Info is exploring producing videos, Rob’s videos are not under the heading of AI Safety Info.