Long Term Charities: Apply for SFF Funding

In the last funding round of the Survival and Flourishing Fund (SFF), I was one of the recommenders whose evaluations helped distribute millions of dollars in charitable grants. I wrote up my experiences here.

Now that applications for the next round are open, I want to strongly encourage more charities whose work plausibly benefits the long term future, especially charities that do not interact much with the rationalist or effective altruist ecosystems and networks, to apply for funding. SFF has no official connection with the effective altruism movement (EA), aside from sometimes recommending grants to support EA-branded charities, and many of those involved are actively seeking opportunities to support projects not connected with the EA social network.
You’ll need a 501(c) charity in order to apply. If you don’t have a 501(c) but would otherwise consider applying, it’s too late for this round but there will be others, so consider getting a 501(c).
The application is not as quick and easy as it could be (I’m hoping to get that to change) but it’s not bad. Filling out such applications can be daunting, and requires much gumption. But the expected value of doing so, if you have any kind of plausible case, is very high. Your odds are remarkably good. There is a lot of money being distributed here, and a lot of money being distributed elsewhere, and a shortage of known worthy causes to which to distribute the money.
You could even get immediate gratification in the form of a speculation grant. A number of people, including myself and also Scott Aaronson, have been given $200k budgets to give out as we see fit. He’s inviting quick pitches for this. If you want to pitch me, the comments here are the place to do it, and I’ll give a quick estimate of how likely I would be to give the organization a speculation grant if they apply. My bar for saying yes if I don’t already know you will be high, but not infinitely high. Try to do this within the week.
This post is an attempt to help charities decide whether applying is worthwhile. Not every cause will be a good fit.
It also aims to tell them how, in my model, to ensure that if they should be funded, they get funded.
And most importantly, it should help you not stress so much and not spend as much time on the application, so it happens at all.
General disclaimer: This is me talking, and I have zero official capacity, and those who make the decisions would likely disagree with some of this, and some of this is probably lousy advice if it were for some bizarre reason taken as advice, and (to cover every possible legal base) this is not any kind of advice.
Jaan’s Philanthropic Priorities
Jaan Tallinn is currently the primary funder of the S-process. He (by design) doesn’t have any direct say in the allocation, but the process reflects his priorities, which you can read here.
This is the headline information.
The primary purpose of my philanthropy is to reduce existential risks to humanity from advanced technologies, such as AI. I believe that this cause is (as per the ITN framework):
important: existential risks are fundamentally important almost by definition — as far as I know, all other philanthropic causes assume continued existence of humanity as their prerequisite;
tractable: since scientific and technological progress is (for better or worse) concentrated in relatively few institutions, I believe it is more tractable than more “diffuse” problems, such as global warming. furthermore, having worked on this topic since 2009, I feel I have a comparative advantage in identifying productive initiatives in this field;
neglected: despite more than a decade of robust growth, still a disproportionately little amount of resources go to reducing existential risks.
therefore, I’m especially likely to support initiatives that (ideally explicitly, but see below) address existential risks. conversely, I’m likely to pass on all other opportunities — especially popular ones, like supporting education, healthcare, arts, and various social causes. Importantly, this is not a vote against such causes, many of which are very important! It’s just that they can be hard for me to support without reducing my attention on existential risks.
Do not let this information overly discourage you. The s-process does care more about things that are existential risks to humanity, as opposed to things that are not, but that’s because they tend to matter more to our long term future. If we’re all dead we don’t have much of a long term future.
The goal is long term survival and flourishing (hence the name, ‘survival and flourishing fund.’)
Each recommender will have a different model of what is likely to make that happen.
This does mean the fund mostly ignores short term flourishing, except as it influences the long term. There isn’t enough potential marginal flourishing in the near term for it to be worth funding for its own sake. Short term survival matters if it is required for long term survival, hence the focus on existential risks.
How and how much do short term survival and flourishing, of various types, and other short term effects, impact the long term? That’s a very interesting question where opinions often differ.
I encourage anyone who is curious to read my previous report. Note that my sense of ‘what matters for the long term’ differs from that of others, and each round will have different recommenders. That’s all the more reason to take a chance on a non-obvious theory of long term impact by applying at all, including applying again in future rounds if you fail, but it’s also an argument against counting on any particular such argument working.
The Logic Behind Funding
Remember that it only takes one recommender to get funded. It’s about getting someone excited, not getting consensus.
People are looking to get excited. Many of us want to find a unique proposal for improving the long term future that we hadn’t considered before, even if it’s indirect or involves a bunch of f***ing around and finding out.
That opens things up quite a lot. Almost anything that isn’t a standard-charity-action and has a plausible impact argument has some chance. Even without me or my particular thinking (the focus on the Doing of the Thing) involved, SFF has in the past funded charter cities, a parliamentary group for future generations, a learning website for basic knowledge, progress studies and systems for general basic scientific research.
Those don’t obviously impact existential risks, so there’s clearly room for other things.
As a toy model, one can divide long-term effects into two broad categories.
Ability to Do various Things (Capabilities).
Existential risk due to the people Doing all these Things (Risks).
We want to increase those capabilities that tend to lead to life getting better for humans, while decreasing risk.
This is tricky, as many capabilities (most importantly AI capability) by default directly increase risk. Yet one of the Things capability lets you Do is decreasing existential risk. Only sometimes (e.g. ‘AI Safety’) is this explicitly labeled as such. There is disagreement about this, but I strongly believe, and I believe that most (but not all) potential recommenders believe, that without sufficient general Capabilities increases, survival and flourishing will be impossible.
Not all capabilities are created equal.
So the question is, which capabilities are you increasing? Are you differentially increasing Capabilities versus increasing Risk, by making progress in ways that make us safer and more generally capable faster than the ways that introduce, amplify and accelerate the risks?
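One crude way to cash out that question (my own framing for illustration, nothing official): estimate how much a project moves useful capability and how much it moves risk, then ask whether its risk-per-unit-of-capability beats the rate at which capability work already tends to create risk. A minimal sketch, with every input a subjective guess:

```python
# Toy differential-progress check (illustrative framing only, not SFF's criterion).

def differentially_good(delta_capability: float, delta_risk: float,
                        baseline_risk_per_capability: float) -> bool:
    """True if the project adds less risk per unit of useful capability
    than the status-quo rate at which capability work creates risk."""
    if delta_capability <= 0:
        return delta_risk < 0  # no capability gain: only worthwhile if it cuts risk
    return delta_risk / delta_capability < baseline_risk_per_capability

# Hypothetical numbers: lots of useful capability, a little added risk,
# compared against a baseline where typical capability work adds more risk per unit.
print(differentially_good(delta_capability=10.0, delta_risk=0.5,
                          baseline_risk_per_capability=0.2))  # True in this toy case
```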
The central example of this question is an ongoing debate as to whether, in this context, accelerating scientific progress is a good or bad idea, or how much additional differentiation is required before it becomes a clearly good idea. Last round, the debate seemed conclusively resolved in favor of basic research being good without any need to focus on existential risk, and worth funding, with the only question being whether the ability was present to turn money into basic research.
Advancement of thinking is similar. More people who better understand the world, or who are better at thinking and figuring things out (in other words, actual education) seems clearly good. The issue is that ‘education’ as a broad field is clearly not neglected, and most funding wouldn’t do much, but targeted efforts, especially software based and those orthogonal to any credentials, are more interesting. If I didn’t think that more learning how to think and more understanding and modeling the world was an important job worth funding, this blog wouldn’t exist.
The other exceptions I found are similar.
Charter cities are potentially another example of low-hanging capabilities fruit. I didn’t fund the particular effort in question because it seemed like catch-up growth rather than an attempt to do something unique, so it fell into the short-term bucket, but a more ambitious project that would have enabled new and unique effective freedom of action would have gotten me to fund it, and one other recommender in the past has gotten to a yes.
It surprised me to find that a parliamentary group got funded, but there’s no question that governmental dysfunction, both in the context of AI risk and in other contexts, is a problem worth addressing. The problem is that proposals tend to be unconvincing and adversarial, which are tough bars to clear in a context like SFF. Game theory and decision theory are places where improvements are clearly valued and proposals often funded, so this might not be that big a stretch.
The most obvious and important exception to capabilities defaulting to good is AI. In the realm of AI, capabilities advancement by default creates risk. Existential risks from AI are the most important cause area, but also the one where one must use the most caution, because default actions in this space tend to effectively be ‘increase capability’ and end up increasing and accelerating risk. Thus, anything one does surrounding AI has to consider its impact on the timeline for AI capabilities development, which is much easier to accelerate than it is to slow down.
There are also other places (such as weaponry) where capabilities default to not being a good idea, but collectively they are the exception.
Meta things can definitely get funded.
This can include regrants to places too local and/or small for SFF. The Long-Term Future Fund got a large grant last time, as did the EA Infrastructure Fund. Some explicit focus on x-risk definitely helps your cause, but isn’t obviously required. As a forward-looking example, if Scott Alexander applied for more money for ACX grants: I don’t agree with all of his choices, but overall I was impressed, and that would cross my ‘worth funding’ threshold.
This can also extend to attempts to build up the intellectual and community infrastructures that are seen as enabling of existential-risk prevention or are otherwise seen as valuable. LessWrong has gotten support in the past, and got support in the last round as the more ambitious Lightcone Infrastructure to give logistical support and coordination to those working on existential risk rather than working on it themselves directly.
List of Things That Definitely Might Get Funded, If Your Proposal Is Good Enough
There are never any guarantees here, but a history of similar proposals being accepted is a good sign, as is knowing some of the thinking of at least one recommender. I’ll start with the obvious x-risks and go from there, and give my quick view on each.
Existential Risk: AI
This is the obvious and most important cause. The key is making people confident you are Helping rather than Not Helping, not that you’re being the most efficient possible. It’s easy to end up doing capabilities research, or to waste time tackling problems too small to matter.
Existential Risk: Biosecurity and Pandemics
There hasn’t been funding for these in past rounds, but I’m confident that’s because there haven’t been good proposals, and in the last round there were no proposals at all. If you have a good proposal here, people will be inclined to listen. Note that many here focus on existential risks and discount lesser risks, but I expect that to loosen somewhat in light of recent events.
In particular, I’d like to see proposals to stop or slow down Gain of Function research, given what we have recently learned about it.
Existential Risk: Nuclear War
The problem with the biosecurity proposals has been their non-existence, whereas the problem with the nuclear war proposals is that they’ve mostly been terrible. ALLFED’s proposal to ensure survival in the aftermath of a potential war was the clear exception, along with things like dealing with EMP effects. Everything else was essentially ‘raising awareness’ and there’s little appetite for that. If you can find a way to actually help with this, there’s definitely people interested, but everyone already knows to try and avoid launching nukes.
Existential Risk: Asteroid Strikes, Rogue Comets, Solar Flares, Et Al
Consider this a grab bag for ‘things that have some low background probability of happening, that we’re in no position to prevent, and would be quite bad.’ A very low probability of something sufficiently bad is still worth mitigating or preventing, if you can actually do that and the result checks out when one Shuts Up and Multiplies. Do Look Up, but the ability to potentially change the outcome is the reason to fund a looking-up program. A sufficiently good idea here would be most welcome, though it’s unclear how high the bar is. Same for other risks I’m not thinking about that are unlikely but real, even if they don’t fall directly into this category.
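To make the Shut Up and Multiply step concrete, here is a deliberately crude expected-value check. Every number below is invented purely for illustration and is not an estimate of any real program:

```python
# Toy expected-value check for a detection/mitigation program.
# All numbers are made up for illustration; substitute your own estimates.

annual_probability = 1e-7   # assumed yearly chance of the catastrophe
years_of_coverage = 50      # assumed horizon over which the program helps
risk_reduction = 0.10       # assumed fraction of that risk the program removes
value_at_stake = 1e15       # assumed dollar-equivalent value of what would be lost

expected_value = annual_probability * years_of_coverage * risk_reduction * value_at_stake
print(f"Expected value of the program: ${expected_value:,.0f}")
# ~$500,000,000 in this made-up example, so a very low probability can still
# justify a serious budget, provided the mitigation would actually work.
```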
Existential Risk: Climate Change and Geoengineering
People who take AI Risk seriously tend to be highly dismissive of focusing on climate change. There’s no doubt it’s happening or that we’d prefer to prevent or mitigate it, but it’s the opposite of neglected. Even if you buy for the sake of argument the full-blown ‘we’re literally all going to die’ story, there’s tons of attention and money already going into it. Yet most of it doesn’t seem all that concerned with being efficient or effective at actually solving the problem, and what seem to me like the obvious things-you-would-do-if-you-wanted-to-actually-solve-this mostly aren’t happening. Also I’ve come around to ‘this is sufficiently damaging to everyone’s psyches and sense of a future that it’s worth solving just to deal with that alone.’ So there’s potential room for carefully targeted ideas that are orders of magnitude more actually likely to do something useful but are having trouble getting other support, if you’ve got one of those, and I’m guessing they exist.
Basic Research, especially Science and Mathematics, also Progress Studies
There’s widespread belief among the reference class of potential recommenders that our current academic and research institutions are badly broken, and we need alternatives that can replace them or at least complement them. Advancements in Mathematics seem likely to differentially help AI Safety over AI capability, because they favor things that it is easier to understand and to prove things about over things where that is harder. If the mathematics in question is targeted to be relevant, that can be a big game in expectation.
For science, getting basic science back on track, encouraging promising talent to work on it, and ensuring they have the necessary resources seems clearly good if it can scale properly.
Scaling up (or continuing) existing successful research efforts is one form of low-hanging fruit here, if you have a track record and the thing capable of scaling.
The more you can make the case that the things you’re trying to figure out or enable are helpful to people trying to think like people and understand how things work, and give person-thinking an advantage against AI-machine-learning-pseudo-thinking, the better.
I’d also note that there’s a decent amount of appetite for a broadly construed version of what Tyler Cowen calls Progress Studies. Knowing how scientific and other progress happens is valuable.
Research Tools and Educational Tools
A number of things in this class have been funded, with the goal of making work acquiring new knowledge more efficient. Our current methods are definitely highly inefficient throughout, and there’s a lot of room for the right not-that-hard-to-make software to make a big difference, potentially involving AI the way Ought is looking to use GPT. There’s potentially huge leverage available, and proven interest.
I continue to worry about the danger of such things differentially enabling and advantaging machine-learning-style-pseudo-thinking in various ways, so if I were evaluating such a project details of that would be important. See my previous post for more details of these questions in the context of Ought.
New Evaluation Methods: Scientific, Technical and Beyond
Current evaluation methods are haphazard and quite poor across the board. If one could figure out something better, that would be very interesting.
Technical and scientific evaluations were the motivating example, but evaluations of all sorts of other things also seem very useful.
Epistemic (and Strategic) Mapping
People believe a lot of strange things across the board. Those who are focusing on rationalist or altruist spaces, and/or on existential risks are not only no exception, they often think and believe things that are even weirder than usual. One problem is that there’s even more of a clash than usual between what feels socially positive, natural, sane or superficially cooperative or enabling of normal navigation of life, and the actual problem space that determines whether goals get accomplished and the physical world ends up in a configuration we prefer. When the things we’re thinking about or working towards seem disconnected from ordinary life, what we believe intellectually and what we believe in our gut can easily diverge, leading to unproductive and contradictory decisions. The problems are often sufficiently abstract that there’s plenty of room to talk oneself into whatever conclusions, timelines and decisions one would unconsciously like to talk oneself into, and to adjust justifications when one’s stated justifications get falsified rather than updating the conclusion. Meanwhile there’s a lot of money running around (often, as I noted in my previous post, Too Much Money) and that too is massively distortionary.
That’s a mouthful of words that add up to a pressing need to discover what the people thinking about the problems that matter actually believe and why, how and to what extent they believe it, and to get that into common knowledge.
Similar work on more practical questions and more ordinary people would also be valuable. Our current methods for knowing what people believe are rather terrible, none of (elites, experts, regular people) have any clue what the other two think or even really what their own group thinks, same with ingroup and outgroup, and so on. If there’s a path to working on that, it would be great.
Real World Experiments in Doing Thing
If there’s a lack of worthwhile things to do with the resources available, then exploring to find potential new worthwhile things and how to execute on them, or learning more about how to execute in general, is high value. That seems to be the world we live in. There’s a remarkable lack of knowledge and skill about how to go from ‘we should do X’ to X being done, even for relatively simple values of X, and it is entirely missing for many potential X that could if successful scale up.
Thus, if there’s a potentially good idea and the will to attempt to execute on that idea, even if the direct impact of that idea is short-term, the impact of learning how to execute ideas translates to the long term. The more one can gather data on the effort and commit to reporting back on what happened, the more useful it can be.
The more such an effort can scale or generalize, the more interesting it is. This can be a test of social technology, or of engineering, and ideally there’s big payoff in the thing itself. As an example, investigating more efficient construction methods both physically and potentially socially (centrally of housing but also other things, and including considering functional and tiny houses) and how to do such things cheaply seems increasingly high value to me while having strong concreteness, or potentially box agriculture. The problem doesn’t have to be that central; I funded Euro Biostasis for being essentially in this class.
As an example of another type of such things, I was highly disappointed by the lack of effort by people in my social circles to take non-super-local action regarding Covid-19 over the past two years, as it provided an excellent laboratory for this, both for learning and training and assembling teams and relationships and for demonstrating ability, as well as being a unique opportunity for impact. That window is likely closing rapidly, but other similar ones are doubtless opening.
Community Support
Obviously this can’t simply be support for any community. So this is where outsiders are going to be, well, on the outs. This has to be community support for the communities that in turn support the causes that excite people. That means those who have a lot of focus on long term concerns including existential risk, and on the things that will enable such focus and for such focus to succeed. This includes, but is not limited to, the Effective Altruist and Rationalist movements and communities.
There’s a broad range of degrees of willingness to engage in this sort of activity, and additionally broad range of disagreement over which types of thing are worth doing. There’s a bunch of different forms of potential support.
I for one am, as I discussed in the previous post, strongly opposed to efforts that are focused on helping these communities and movements acquire more power and money.
I am more optimistic about providing logistical support for individual members, either small support in their formative years to get on their feet and pursue high-upside options without locally binding constraints, or for richer logistical support for those doing the impactful work. Potential conflicts of interest, the danger of nepotism and people potentially getting involved for the wrong reasons make this tricky.
I am also optimistic about richer logistical support for those supporting the communities, and for the communities themselves, via the provision of public goods or paying costs of coordination, but would warn it has a poor track record getting funded so far. And of course all the potential conflicts of interest are still present, and I am no doubt biased.
But I believe that physical location is a big game, in terms of helping get the right people to the right places, with co-location with other right people, including both co-location of housing and also having places to work, play, gather and naturally interact.
Other High Leverage Stuff
This is not meant to be a complete list even of the things that are well-positioned, and it’s definitely not a complete list of the things that are worth doing or worth funding in the world. If there’s an approach with sufficiently high expected impact and leverage, people stand ready to listen, and even if you don’t get major funding it will at least make smart people think about your approach, maybe come up with some ideas or even do something themselves, and hopefully at least give you some useful feedback. There’s a broad sense that bold new different things would be good.
If you’re considering something political, it will need to be a ‘pull the rope sideways’ approach to pursue a policy, or help for those pursuing influence on policy in a long term sphere. Helping the ingroup best the outgroup is right out, as is either side of every hot-button political issue I can think of right now.
Help for the developing world is a long shot, since that’s explicitly not what the fund is looking to do and is not neglected by similar funding sources elsewhere, but it is interesting if a case can be made that it differentially helps with the causes listed in this section, as discussed in the logic section above.
My Particular Priorities
A major part of my model is that we don’t have enough People Doing Thing, for meaningful values of any of People, Doing and Thing. If we had more People Doing Thing, that would be great, and we need a cultural shift towards the belief that Person who thinks there’s an important Thing to Do should by default go and Do that Thing. That’s not a central viewpoint, but similar views are held by several others who are potential recommenders, so there’s a good chance someone with at least some of this view is in the round.
I’m a big fan of taking people and organizations capable of Doing real actual Thing and interested in Doing real actual Thing and enabling them to Do real actual Thing, and of freeing them from constraints that would prevent this.
I’d consider that area my ‘comparative advantage’ in terms of what I’m likely to give a speculation grant to versus others who give such grants. It will still take a strong case to get me interested, but it can be done and has been done. Unfortunately, my bar for a speculation grant has to be higher than for funding in the round, if I don’t expect others to agree, due to the way it is structured.
I’m very interested in existential risk from AI, but so are half the others with speculation grants; one note is that I’m relatively excited by non-ML approaches compared to most others.
I’m interested in community support that involves physical locations and coordination, but again I am skeptical of its ability to get funding in the full round, and thus it would be tough for me to justify a speculation grant (since it would likely get evaluated poorly).
SFF Application
Let’s take a look at the application including its long form to see what’s involved in trying. A lot of it is basic information that is presumably already at your fingertips (for example, the organization’s name) so I’ll skip that stuff.
One piece of advice up front is to think about the links you’re providing, and what recommenders are likely to see when they’re looking at you and asking what you’re about. Ensure they can get the information you need them to get.
First there’s the ‘non-strategic’ stuff that doesn’t involve making any decisions.
There’s the whole thing about the receiving charity and showing that you can logistically accept the funds. I don’t know how annoying that is on the charity’s end, but it is what it is. Make sure it’s handled.
They ask for compensation information. It is traditional for funders to want to keep salaries down and be suspicious of higher ones, but I no longer believe this is the case. If something is worth funding, it’s worth paying the people involved enough to have a decent standard of living, and to not ask them to forfeit too much money relative to their opportunity costs. There’s obviously a limit, but on the margin don’t worry about this hurting you.
They ask for an org chart of responsibility. This seemed to mostly be pro forma so don’t sweat the details, just be accurate. Mostly it’s simply ‘who is in charge here’ and ‘who else is here at all.’
Then there’s the part where you need to make your case.
Most of the charities that I talked to were deeply confused about what to say here to make their case or what SFF wanted to see, and those are the ones that did apply anyway.
Presumably if you wouldn’t by default apply, you’re even more confused.
Part of this is that SFF has different recommenders each time, who interact differently, and thus is highly unpredictable.
Part of it is that what people say they care about and reward is often not what is actually rewarded.
There is a basic strategic decision to make. You need to decide on two request numbers, the Base Request and the Maximum Request, and on a plan for spending both amounts, and for your plan for Impact.
On top of that, they then ask for an unspecified but potentially huge amount of detail that you could reasonably not have ever thought about carefully or have had any good reason to think about, and which you’re worried could make or break your application. I got stressed purely thinking about someone else filling out these parts of the application.
They also ask for a Spending Track Record, but that’s literally just asking for an approximation of how much you’ve spent this past year, so don’t sweat that, just make it approximately accurate.
They ask about ‘conspicuously absent funding sources.’ If your response to that is ‘what are these conspicuous funding sources, huh?’ then it seems fine to note that you’re not familiar with the funding ecosystems this likely refers to, and that you’d be happy to get advice on other good matches for what you’re doing.
I gave some ‘this is what seems like it works’ advice back in my previous post under ‘incentives of the S-process.’ All of that mostly still applies and here I will attempt to provide more clarity and detail where it seems useful.
It is all a holistic package, and should be thought about that way. You’re offering two deals for sale, the standard and the deluxe package. You want to charge cheerful prices for both, offer a clear value package for both, and make your simple case for impact in a way that will resonate. Ideally (in terms of getting approved) it also fits neatly into an existing approved box, but ideally (in terms of actually accomplishing something) it doesn’t and instead has a clear case for why we need an additional or bigger box.
The Impact Plan
On reflection, the best approach is indeed the natural one, which is to start with your impact plan and what you intend to do, and then choose the size of your request afterwards based on that.
These are plans for impact, more than they are plans for spending. The spending follows from the plan for impact. The spending part needs to be sane, and I talk about it under The Request in a bit, but the key is what you plan to accomplish, and why accomplishing that thing will advance survival and/or flourishing.
Here we need (if possible) to form two plans, the Basic Plan and the Deluxe Plan. If you count the general Organization Plan that’s independent of SFF, that’s three plans.
The Basic Plan and its corresponding Base Request is the reasonable we-can-do-the-thing-we-are-here-to-do plan. In some cases this naturally ‘wants’ to be a large plan, in which case go for it up to about the size of requests in previous rounds (so probably not more than $1.5mm or so).
If you go to the high end of that, your alternate plans for lesser amounts start to matter more, but even then don’t sweat the details here. SFF notes things that could easily drive you crazy, like ‘it is often helpful if you would note what you would do with 25%, 50% or 75% of your request.’ I do think you should provide the one-sentence version of this information, but unless it’s already written I wouldn’t do much more than that. Same goes in many other places. Those considering giving you half your request will want to know if this means you do basically the same thing but less of it, or something else entirely. Often the answer is ‘well then we’d have to spend more time fundraising’ and that seems fine.
Anyway, back to the plan. You should propose that you’ll Do a Thing of some kind, with a clear case for why the Thing will have Impact.
The above discussions should ideally have given you an idea of the kinds of Impacts that are likely to count and go over well, and what aspects you’ll want to highlight. As usual with such things, it’s probably best to choose one story about Impact that you think is your best and clearly focus on it.
Plans that aren’t sure to succeed are totally fine. If the expected value play is to aim big, then aim big, the process on average is likely to reward this rather than punish it. At least one charity I talked to chose to talk about an unexciting but ‘shovel-ready’ thing they could reliably pull off rather than the more exciting but less ready things that I would have actually cared about them doing. Don’t make that mistake.
If you fit into one of the standard Impact boxes, great. If you don’t, it gets trickier. Again, the focus is on providing the fuel for one person to get excited, so don’t hedge your bets or try something convoluted to act like you fit into a box. Instead, if the actual reason isn’t a short-term thing, give the actual reason why one would care enough to do this, even if it sounds weird, and hope someone else also gets it. You’d be surprised, these are smart well-meaning people, they can get it sometimes.
If the actual reason is a short term thing, for example you have a charity to help with third-world poverty or disease, then you’ll have to set aside your core motivation and ask if the thing you’re doing is worth doing anyway.
Sometimes the answer is no, but often the answer is a strong and honest hell yes.
Doing a Thing is typically Insanely Great, and tends to have all sorts of good knock-on effects. Improving things often leads to more improvements and the collection of compound interest (although sometimes the surplus gets eaten by rent seekers and status expectation adjustments and hedonic treadmills, so it’s hard to know). It often leads to skilling up, and capacity to Do more and better Things, and learning about which Things can be Done and in what ways.
Is it likely to be the first best thing if you exclude your primary motivation? Well, no. Of course not. The good news is, that’s nothing like the bar you need to clear. The best thing to do, even for relaxed values of ‘best,’ is orders of magnitude better than the threshold required to make something worth doing. ‘Wasting’ most of your efforts is totally fine, as long as you point to the part that isn’t wasted. Most of my day is kind of wasted most of the time, that’s how it goes. Similarly, I still endorse that the world’s best charity remains Amazon.com, and it’s not because that was what Bezos set out to do.
The Impact Evidence
You’re asked to provide evidence that you’ve had impact in the past.
Mostly my model says ‘don’t sweat it,’ but this is one place you should consider sweating a bit, if it would improve your answer, because people do care about this.
You’ll want to put your best and most legible foot forward, in ways recommenders are likely to care about. What have you done that matters? Either in terms of proving you can be impactful going forward, or that you’ve been impactful (to the far future) in the past.
Emphasis on the long-term nature of this argument. If you did something short-term that will carry over to the long-term, you’ll want to make the case for why that’s true. Otherwise, it will lead to an ‘oh this isn’t what we are looking for’ reaction.
The argument you make really does matter here, but remember that what matters is that one person gets excited, not that the argument works on everyone.
The more concrete and direct the impact, the better. The more it can be objective and verified, the better. The more it matches what you want to do next, the better.
Tell specific stories, specific wins. Individual people who you’ve impacted, and that went on to therefore accomplish something, are good. So are many other things.
Make sure that within this frame:
If they’re reading your published papers, they read your best paper. Same with blog posts or books or any other media.
If you’ve built software or invented concepts, show your best.
If you’ve helped people along their path, they know the best wins.
If you’ve had lots of conferences, they know the most impressive relationships, connections, ideas and such that resulted.
And so on.
Also notice that you’re trying to show two things.
What you do matters.
You can execute (and ideally, you can scale) what you intend to do next.
This is a place to show both of these things.
The Request
There’s no point in asking for less than the cheerful price to execute your plan. If it’s ‘this is exactly enough to execute if we all pinch our pennies’ then there are times and places for that, but this is not one of them. If it’s worth doing, it’s worth doing right, with slack in the budget and less worry throughout the day about economizing when there’s important work to be done. The limiting factors on action are talent and ideas, so make the most of yours on both counts.
Also, the process rewards bigger requests over smaller requests, in terms of what you’re likely to end up getting. The process effectively anchors the final grant proportionally to the size of the request, because the curve is shaped a lot by how many dollars provide any value at all, and also due to basic anchoring.
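To see why this happens, here is a toy allocation model of my own (a simplification for illustration, not the actual S-process math): give each organization a marginal value curve that declines to zero only at its full request, then split the pool greedily toward whichever marginal dollar is currently worth the most. The bigger request wins more dollars simply because its curve stays positive for longer.

```python
# Toy model (not the actual S-process): each org gets a marginal value curve that
# starts at the same height and declines linearly to zero at its full request.
# A greedy allocator funds whichever org's next dollar is currently worth the most.

def marginal_value(allocated: float, request: float, start: float = 1.0) -> float:
    """Linearly declining marginal value, hitting zero at the full request."""
    return max(0.0, start * (1 - allocated / request))

def allocate(pool: float, requests: dict[str, float], step: float = 10_000) -> dict[str, float]:
    grants = {name: 0.0 for name in requests}
    remaining = pool
    while remaining > 0:
        best = max(grants, key=lambda n: marginal_value(grants[n], requests[n]))
        if marginal_value(grants[best], requests[best]) <= 0:
            break  # nobody values further funding
        grants[best] += step
        remaining -= step
    return grants

# Two orgs judged equally valuable per marginal dollar, with different request sizes.
print(allocate(1_000_000, {"asks_500k": 500_000, "asks_1500k": 1_500_000}))
# The larger request ends up with the larger grant purely because its curve
# reaches zero later -- the anchoring effect described above.
```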
Do keep the following words in mind, from the application:
There is no need to be particularly detailed in the upcoming spending and funding questions. If you have prepared documents for other uses, please feel free to include them as your response; it’s likely they’ll suffice for our purposes.
They really don’t want lots of detail here, in my experience. They want to have a good enough sense to make funding decisions. If they need the details, which is rare, they can follow up and ask for them later.
Therefore, basic advice:
Ask as a base request for the amount to comfortably and properly continue your work, and execute on the core plan you’ve selected.
Ask for a generous maximum amount that lets you not worry about money for a while, but that wouldn’t be a burden or put you in danger of having Too Much Money, or put you under felt pressure to spend too much too fast.
Make sure the basic-request plan matches the basic-request quantity, but with reasonable slack involved.
Don’t sweat the budget details, this can be very general, and as long as you’re making sane decisions it’s not going to matter. All it’s effectively meant to do, other than a sanity check, is give a general sense of what the expensive parts are versus the cheap parts.
Remember that even the basic plan is allowed to have flexibility in it and not be over specified.
Remember that a common outcome is to get a non-trivial and useful amount of money that is importantly less than your base request, and another common outcome is to get importantly more.
The right maximum-amount plan does not involve specific plans for spending all the funds within the first two years.
Do make sure your requests will pass (generous) sanity checks.
When in doubt, ask for general support; it’s what mostly happens anyway.
When in doubt, give less detail now, give it later if someone asks.
Mostly I believe this is true of life in general. People don’t ask for enough money, and are far too afraid they’re asking for too much or not offering enough in return, and they start giving power what they think it wants and offering lots of detailed plans and concessions and promises even unprompted.
Recently, I got asked to do a consulting job that wasn’t world-improving or resume-enhancing, so I quoted my zero-sum-game price, which is high, and they didn’t hire me, and I’m assuming that’s at least partly due to the price. If that doesn’t happen reasonably often when people ask about such tasks, you’re not asking for enough.
You can of course still overdo it, especially with the base request. If you ask for a number that’s implausibly high, you’ll likely be docked points and the anchor won’t stick. But at other times, people are looking for a way to give you a bunch of funding.
Make sure the base request amount and the plan you’re proposing to execute match up and live in the same world.
Invitation
Like Scott Aaronson, I’m inviting you to check with me if you’re considering applying, to see how interested I’d be or to get my guess as to your chances. One way is to leave a comment here asking, including links to the necessary info. I’m also happy to give specific additional thoughts on details.
If there’s a substantial chance on first pass I’d give you a speculation grant once you’ve applied, I’ll let you know, along with other quick thoughts.
If you need a soft commit before you can justify applying, I’ll see if I can give you one at the cost of being less likely to say yes (since I’d have to do it in advance).
Alas, there is a limit to how much time I can devote to this, especially given I will not be a recommender in this round. If I get flooded with questions and requests, I may not be able to give each my proper attention and would need to do triage, and I apologize for that in advance.
Now that applications for the next round are open, I want to strongly encourage more charities whose work plausibly benefits the long term future, especially charities that do not interact much with the rationalist or effective altruist ecosystems and networks, to apply for funding. SFF has no official connection with the effective altruism movement (EA), aside from sometimes recommending grants to support EA-branded charities; and, many involved are actively seeking opportunities to support projects not connected with the EA social network.
You’ll need a 501(c) charity in order to apply. If you don’t have a 501(c) but would otherwise consider applying, it’s too late for this round but there will be others, so consider getting a 501(c).
The application is not as quick and easy as it could be (I’m hoping to get that to change) but it’s not bad. Filling out such applications can be daunting, and requires much gumption. But the expected value of doing so, if you have any kind of plausible case, is very high. Your odds are remarkably good. There is a lot of money being distributed here, and a lot of money being distributed elsewhere, and a shortage of known worthy causes to which to distribute the money.
You could even get immediate gratification in the form of a speculation grant. A number of people, including myself and also Scott Aaronson, have been given $200k budgets to give out as we see fit. He’s inviting quick pitches for this. If you want to pitch me, the comments here are the place to do it, and I’ll give a quick estimate of how likely I would be to give the organization a speculation grant if they apply. My bar for saying yes if I don’t already know you will be high, but not infinitely high. Try to do this within the week.
This post is an attempt to help charities decide whether applying is worthwhile. Not every cause will be a good fit.
It also aims to tell them how, in my model, to ensure that if they should be funded, they get funded.
And most importantly, it should help you not stress so much and not spend as much time on the application, so it happens at all.
General disclaimer: This is me talking, and I have zero official capacity, and those who make the decisions would likely disagree with some of this, and some of this is probably lousy advice if it were for some bizarre reason taken as advice, and (to cover every possible legal base) this is not any kind of advice.
Jaan’s Philanthropic Priorities
Jaan Tallinn is currently the primary funder of the S-process. He (by design) doesn’t have any direct say in the allocation, but the process reflects his priorities, which you can read here.
This is the headline information.
The primary purpose of my philanthropy is to reduce existential risks to humanity from advanced technologies, such as AI. I believe that this cause is (as per the ITN framework):
important: existential risks are fundamentally important almost by definition — as far as I know, all other philanthropic causes assume continued existence of humanity as their prerequisite;
tractable: since scientific and technological progress is (for better or worse) concentrated in relatively few institutions, I believe it is more tractable than more “diffuse” problems, such as global warming. furthermore, having worked on this topic since 2009, I feel I have a comparative advantage in identifying productive initiatives in this field;
neglected: despite more than a decade of robust growth, still a disproportionately little amount of resources go to reducing existential risks.
therefore, I’m especially likely to support initiatives that (ideally explicitly, but see below) address existential risks. conversely, I’m likely to pass on all other opportunities — especially popular ones, like supporting education, healthcare, arts, and various social causes. Importantly, this is not a vote against such causes, many of which are very important! It’s just that they can be hard for me to support without reducing my attention on existential risks.
Do not let this information overly discourage you. The s-process does care more about things that are existential risks to humanity, as opposed to things that are not, but that’s because they tend to matter more to our long term future. If we’re all dead we don’t have much of a long term future.
The goal is long term survival and flourishing (hence the name, ‘survival and flourishing fund.’)
Each recommender will have a different model of what is likely to make that happen.
This does mean the fund mostly ignores short term flourishing, except as it influences the long term. There isn’t enough potential marginal flourishing in the near term for its own sake. Short term survival matters if it is required for long term survival, hence the focus on existential risks.
How and how much do short term survival and flourishing, of various types, and other short term effects, impact the long term? That’s a very interesting question where opinions often differ.
I encourage anyone who is curious to read my previous report. Note that my sense of ‘what matters for the long term’ differs from that of others, and each round will have different recommenders. That’s all the more reason to take a chance on a non-obvious theory of long term impact in terms of applying at all, including multiple times when you fail, but an argument against counting on such an argument working.
The Logic Behind Funding
Remember that it only takes one recommender to get funded. It’s about getting someone excited, not getting consensus.
People are looking to get excited. Many of us want to find a unique proposal for improving the long term future that we hadn’t considered before, even if it’s indirect or involves a bunch of f***ing around and finding out.
That opens things up quite a lot. Almost anything that isn’t a standard-charity-action and has a plausible impact argument has some chance. Even without me or my particular thinking (the focus on the Doing of the Thing) involved, SFF has in the past funded charter cities, a parliamentary group for future generations, a learning website for basic knowledge, progress studies and systems for general basic scientific research.
Those don’t obviously impact existential risks, so there’s clearly room for other things.
As a toy model, one can divide long-term effects into two broad categories.
Ability to Do various Things (Capabilities).
Existential risk due to the people Doing all these Things (Risks).
We want to increase those capabilities that tend to lead to life getting better for humans, while decreasing risk.
This is tricky, as many capabilities (most importantly AI capability) by default directly increase risk. Yet one of the Things capability lets you Do is decreasing existential risk. Only sometimes (e.g. ‘AI Safety’) is this explicitly labeled as such. There is disagreement about this, but I strongly believe, and I believe that most (but not all) potential recommenders believe, that without sufficient general Capabilities increases, survival and flourishing will be impossible.
Not all capabilities are created equal.
So the question is, which capabilities are you increasing? Are you differentially increasing Capabilities versus increasing Risk, by making progress in ways that make us safer and more generally capable faster than the ways that introduce, amplify and accelerate the risks?
The central example of this question is an ongoing debate as to whether, in this context, accelerating scientific progress is a good or bad idea, or how much additional differentiation is required before it becomes a clearly good idea. Last round, the debate seemed conclusively resolved in favor of basic research being good without any need to focus on existential risk, and worth funding, with the only question being whether the ability was present to turn money into basic research.
Advancement of thinking is similar. More people who better understand the world, or who are better at thinking and figuring things out (in order words, actual education) seems clearly good. The issue is that ‘education’ as a broad field is clearly not neglected, and most funding wouldn’t do much, but targeted efforts, especially software based and those orthogonal to any credentials, are more interesting. If I didn’t think that more learning how to think and more understanding and modeling the world was an important job worth funding, this blog wouldn’t exist.
The other exceptions I found are similar.
Charter cities are potentially another example of low-hanging capabilities fruit. I didn’t fund because the particular effort seemed like catch-up growth rather than an attempt to do something unique, so it fell into the short-term bucket, but a more ambitious project that would have enabled new and unique effective freedom of action, would have gotten me to fund, and one other recommender in the past has gotten to a yes.
It surprised me to find that a parliamentary group got funded, but there’s no question that governmental dysfunction, both in the context of AI risk and in other contexts, is a problem worth addressing. The problem is that proposals tend to be unconvincing and adversarial, which are tough bars to clear in a context like SFF. Game theory and decision theory are places where improvements are clearly valued and proposals often funded, so this might not be that big a stretch.
The most obvious and important exception to capabilities defaulting to good is AI. In the realm of AI, capabilities advancement by default creates risk. Existential risks from AI are the most important cause area, but also the one where one must use the most caution, because default actions in this space tend to effectively be ‘increase capability’ and end up increasing and accelerating risk. Thus, anything one does surrounding AI has to consider its impact on the timeline for AI capabilities development, which is much easier to accelerate than it is to slow down.
There are also other places (such as weaponry) where capabilities default to not being a good idea, but collectively they are the exception.
Meta things can definitely get funded.
This can include regrants to places too local and/or small for SFF. Long-Term Future Fund got a large grant last time as did the EA Infrastructure Fund. Some explicit focus on x-risk definitely helps your cause, but isn’t obviously required. As a forward-looking example, if Scott Alexander applied for more money for ACX grants, I don’t agree with all of his choices, but overall I was impressed, and that would have crossed my ‘worth funding’ threshold.
This can also extend to attempts to build up the intellectual and community infrastructures that are seen as enabling of existential-risk prevention or are otherwise seen as valuable. LessWrong has gotten support in the past, and got support in the last round as the more ambitious Lightcone Infrastructure to give logistical support and coordination to those working on existential risk rather than working on it themselves directly.
List of Things That Definitely Might Get Funded, If Your Proposal Is Good Enough
There’s never any guarantees here, but a history of a similar proposal being accepted is good, as is knowing some of the thinking taking place by at least one recommender. I’ll start with the obvious x-risks and go from there, and give my quick view on each.
Existential Risk: AI
This is the obvious and most important cause. The key is making people confident you are Helping rather than Not Helping, not that you’re being the most efficient possible. It’s easy to end up doing capabilities research, or to waste time tackling problems too small to matter.
Existential Risk: Biosecurity and Pandemics
There hasn’t been funding for these in past rounds, but I’m confident that’s because there haven’t been good proposals, and in the last round there were no proposals at all. If you have a good proposal here, people will be inclined to listen. Note that the focus of many here is on existential risks and to discount lesser risks, but I expect that to loosen somewhat in light of recent events.
In particular, I’d like to see proposals to stop or slow down Gain of Function research, given what we have recently learned about it.
Existential Risk: Nuclear War
The problem with the biosecurity proposals has been their non-existence, whereas the problem with the nuclear war proposals is that they’ve mostly been terrible. ALLFED’s proposal to ensure survival in the aftermath of a potential war was the clear exception, along with things like dealing with EMP effects. Everything else was essentially ‘raising awareness’ and there’s little appetite for that. If you can find a way to actually help with this, there’s definitely people interested, but everyone already knows to try and avoid launching nukes.
Existential Risk: Asteroid Strikes, Rogue Comets, Solar Flares, Et Al
Consider this a grab bag for ‘things that have some low background probability of happening, that we’re in no position to prevent, and would be quite bad.’ A very low probability of something sufficiently bad is still worth mitigating or preventing, if you can actually do that and the result checks out when one Shuts Up and Multiplies. Do Look Up, but the ability to potentially change the outcome is the reason to fund a looking-up program. A sufficiently good idea here would be most welcome, unclear the height of the bar. Same for other risks I’m not thinking about that are unlikely but real, even if they don’t fall directly into this category.
Existential Risk: Climate Change and Geoengineering
People who take AI Risk seriously tend to be highly dismissive of focusing on climate change. There’s no doubt it’s happening or that we’d prefer to prevent or mitigate it, but it’s the opposite of neglected. Even if you buy for the sake of argument the full-blown ‘we’re literally all going to die’ story, there’s tons of attention and money already going into it. Yet most of it doesn’t seem all that concerned with being efficient or effective at actually solving the problem, and what seem to me like the obvious things-you-would-do-if-you-wanted-to-actually-solve-this mostly aren’t happening. Also I’ve come around to ‘this is sufficiently damaging to everyone’s psyches and sense of a future that it’s worth solving just to deal with that alone.’ So there’s potential room for carefully targeted ideas that are orders of magnitude more actually likely to do something useful but are having trouble getting other support, if you’ve got one of those, and I’m guessing they exist.
Basic Research, especially Science and Mathematics, also Progress Studies
There’s widespread belief among the reference class of potential recommenders that our current academic and research institutions are badly broken, and we need alternatives that can replace them or at least complement them. Advancements in Mathematics seem likely to differentially help AI Safety over AI capability, because they favor things that it is easier to understand and to prove things about over things where that is harder. If the mathematics in question is targeted to be relevant, that can be a big game in expectation.
For science getting basic science back on track and encouraging promising talent to work on it and ensure they have the resources necessary seems clearly good if it can scale properly.
Scaling up (or continuing) existing successful research efforts is one form of low-hanging fruit here, if you have a track record and the thing capable of scaling.
The more you can make the case that the things you’re trying to figure out or enable are helpful to people trying to think like people and understand how things work, and gives person-thinking an advantage against AI-machine-learning-pseudo-thinking, the better.
I’d also note that there’s a decent amount of appetite for a broadly construed version of what Tyler Cowen calls Progress Studies. Knowing how scientific and other progress happens is valuable.
Research Tools and Educational Tools
A number of things in this class have been funded, with the goal of making work acquiring new knowledge more efficient. Our current methods are definitely highly inefficient throughout, and there’s a lot of room for the right not-that-hard-to-make software to make a big difference, potentially involving AI the way Ought is looking to use GPT. There’s potentially huge leverage available, and proven interest.
I continue to worry about the danger of such things differentially enabling and advantaging machine-learning-style-pseudo-thinking in various ways, so if I were evaluating such a project details of that would be important. See my previous post for more details of these questions in the context of Ought.
New Evaluation Methods: Scientific, Technical and Beyond
Current evaluation methods are haphazard and quite poor across the board. If one could figure out something better, that would be very interesting.
Technical and scientific evaluations were the motivating example, but evaluations of all sorts of other things also seem very useful.
Epistemic (and Strategic) Mapping
People believe a lot of strange things across the board. Those who focus on rationalist or altruist spaces, and/or on existential risks, are not only no exception; they often think and believe things that are even weirder than usual. One problem is that there’s even more of a clash than usual between what feels socially positive, natural, sane or superficially cooperative or enabling of normal navigation of life, and the actual problem space that determines whether goals get accomplished and the physical world ends up in a configuration we prefer. When the things we’re thinking about or working towards seem disconnected from ordinary life, what we believe intellectually and what we believe in our gut can easily diverge, leading to unproductive and contradictory decisions. The problems are often sufficiently abstract that there’s plenty of room to talk oneself into whatever conclusions, timelines and decisions one would unconsciously like to talk oneself into, and to adjust justifications when one’s stated justifications get falsified rather than updating the conclusion. Meanwhile there’s a lot of money running around (often, as I noted in my previous post, Too Much Money) and that too is massively distortionary.
That’s a mouthful of words that add up to a pressing need: to discover what people thinking about the problems that matter actually believe and why, how and to what extent they believe it, and to get that into common knowledge.
Similar work on more practical questions and more ordinary people would also be valuable. Our current methods for knowing what people believe are rather terrible: none of (elites, experts, regular people) have any clue what the other two think or even really what their own group thinks, same with ingroup and outgroup, and so on. If there’s a path to working on that, it would be great.
Real World Experiments in Doing Thing
If there’s a lack of worthwhile things to do with the resources available, then exploring to find potential new worthwhile things and how to execute on them, or learning more about how to execute in general, is high value. That seems to be the world we live in. There’s a remarkable lack of knowledge and skill about how to go from ‘we should do X’ to X being done, even for relatively simple values of X, and it is entirely missing for many potential X that could if successful scale up.
Thus, if there’s a potentially good idea and the will to attempt to execute on that idea, even if the direct impact of that idea is short-term, the impact of learning how to execute ideas translates to the long term. The more one can gather data on the effort and commit to reporting back on what happened, the more useful it can be.
The more such an effort can scale or generalize, the more interesting it is. This can be a test of social technology, or of engineering, and ideally there’s big payoff in the thing itself. As an example, investigating more efficient construction methods, both physical and potentially social (centrally of housing but also other things, including functional and tiny houses), and how to do such things cheaply, seems increasingly high value to me while having strong concreteness; box agriculture is another possibility. The problem doesn’t have to be that central; I funded Euro Biostasis for being essentially in this class.
As an example of another type of such things, I was highly disappointed by the lack of effort by people in my social circles to take non-super-local action regarding Covid-19 over the past two years, as it provided an excellent laboratory for this, both for learning and training and assembling teams and relationships and for demonstrating ability, as well as being a unique opportunity for impact. That window is likely closing rapidly, but other similar ones are doubtless opening.
Community Support
Obviously this can’t simply be support for any community. So this is where outsiders are going to be, well, on the outs. This has to be community support for the communities that in turn support the causes that excite people. That means those who have a lot of focus on long term concerns including existential risk, and on the things that will enable such focus and for such focus to succeed. This includes, but is not limited to, the Effective Altruist and Rationalist movements and communities.
There’s a broad range of degrees of willingness to engage in this sort of activity, and additionally a broad range of disagreement over which types of thing are worth doing. There’s a bunch of different forms of potential support.
I for one am, as I discussed in the previous post, strongly opposed to efforts that are focused on helping these communities and movements acquire more power and money.
I am more optimistic about providing logistical support for individual members, either small support in their formative years to get on their feet and pursue high-upside options without locally binding constraints, or richer logistical support for those doing the impactful work. Potential conflicts of interest, the danger of nepotism and people potentially getting involved for the wrong reasons make this tricky.
I am also optimistic about richer logistical support for those supporting the communities, and for the communities themselves, via the provision of public goods or paying costs of coordination, but would warn it has a poor track record getting funded so far. And of course all the potential conflicts of interest are still present, and I am no doubt biased.
But I believe that physical location is a big game, in terms of helping get the right people to the right places, with co-location with other right people, including both co-location of housing and also having places to work, play, gather and naturally interact.
Other High Leverage Stuff
This is not meant to be a complete list even of the things that are well-positioned, and it’s definitely not a complete list of the things that are worth doing or worth funding in the world. If there’s an approach with sufficiently high expected impact and leverage, people stand ready to listen, and even if you don’t get major funding it will at least make smart people think about your approach, maybe come up with some ideas or even do something themselves, and hopefully at least give you some useful feedback. There’s a broad sense that bold new different things would be good.
If you’re considering something political, it will need to be a ‘pull the rope sideways’ approach to pursue a policy, or help for those pursuing influence on policy in a long term sphere. Helping the ingroup best the outgroup is right out, as is either side of every hot-button political issue I can think of right now.
Help for the developing world is a long shot, since that’s explicitly not what the fund is looking to do and it is not neglected by similar funding sources elsewhere, but it is interesting if a case can be made that it differentially helps with the causes listed in this section, as discussed in the logic section above.
My Particular Priorities
A major part of my model is that we don’t have enough People Doing Thing, for meaningful values of any of People, Doing and Thing. If we had more People Doing Thing, that would be great, and we need a cultural shift towards the belief that Person who thinks there’s an important Thing to Do should by default go and Do that Thing. That’s not a central viewpoint, but similar views are held by several others who are potential recommenders, so there’s a good chance someone with at least some of this view is in the round.
I’m a big fan of taking people and organizations capable of Doing real actual Thing and interested in Doing real actual Thing, and enabling them to Do real actual Thing, and of freeing them from constraints that would prevent this.
I’d consider that area my ‘comparative advantage’ in terms of what I’m likely to give a speculation grant to versus others who give such grants. It will still take a strong case to get me interested, but it can be done and has been done. Unfortunately, my bar for a speculation grant has to be higher than for funding in the round, if I don’t expect others to agree, due to the way it is structured.
I’m very interested in existential risk from AI, but so are half the others with speculation grants; one note is that I’m more excited by non-ML approaches than most others are.
I’m interested in community support that involves physical locations and coordination, but again I am skeptical of such projects’ ability to get funding in the round, and thus it would be tough to justify a speculation grant (since it would likely get evaluated poorly).
SFF Application
Let’s take a look at the application including its long form to see what’s involved in trying. A lot of it is basic information that is presumably already at your fingertips (for example, the organization’s name) so I’ll skip that stuff.
One piece of advice up front is to think about the links you’re providing, and what recommenders are likely to see when they’re looking at you and asking what you’re about. Ensure they can get the information you need them to get.
First there’s the ‘non-strategic’ stuff that doesn’t involve making any decisions.
There’s the whole thing about the receiving charity and showing that you can logistically accept the funds. I don’t know how annoying that is on the charity’s end, but it is what it is. Make sure it’s handled.
They ask for compensation information. It is traditional for funders to want to keep salaries down and to be suspicious of higher ones, but I no longer believe this is much of a concern here. If something is worth funding, it’s worth paying the people involved enough to have a decent standard of living, and not asking them to forfeit too much money relative to their opportunity costs. There’s obviously a limit, but on the margin don’t worry about this hurting you.
They ask for an org chart of responsibility. This seemed to mostly be pro forma so don’t sweat the details, just be accurate. Mostly it’s simply ‘who is in charge here’ and ‘who else is here at all.’
Then there’s the part where you need to make your case.
Most of the charities that I talked to were deeply confused about what to say here to make their case or what SFF wanted to see, and those are the ones that did apply anyway.
Presumably if you wouldn’t by default apply, you’re even more confused.
Part of this is that SFF has different recommenders each time, who interact differently, and thus is highly unpredictable.
Part of it is that what people say they care about and reward is often not what is actually rewarded.
There is a basic strategic decision to make. You need to decide on two request numbers, the Base Request and the Maximum Request, on a plan for spending both amounts, and on your plan for Impact.
On top of that, they then ask for an unspecified but potentially huge amount of detail that you could reasonably not have ever thought about carefully or have had any good reason to think about, and which you’re worried could make or break your application. I got stressed purely thinking about someone else filling out these parts of the application.
They also ask for Spending Track Record, but that’s literally an approximation of how much you’ve spent this past year, so don’t sweat that, just make it approximately accurate.
They ask about ‘conspicuously absent funding sources.’ If your response to that is ‘what are these conspicuous funding sources, huh?’ then it seems fine to note that you’re not familiar with the funding ecosystems this likely refers to, and that you’d be happy to get advice on other good matches for what you’re doing.
I gave some ‘this is what seems like it works’ advice back in my previous post under ‘incentives of the S-process.’ All of that mostly still applies and here I will attempt to provide more clarity and detail where it seems useful.
It is all a holistic package, and should be thought about that way. You’re offering two deals for sale, the standard and the deluxe package. You want to charge cheerful prices for both, offer a clear value package for both, and make your simple case for impact in a way that will resonate. Ideally (in terms of getting approved) it also fits neatly into an existing approved box, but ideally (in terms of actually accomplishing something) it doesn’t, and instead has a clear case for why we need an additional or bigger box.
The Impact Plan
On reflection, the best approach is indeed the natural one, which is to start with your impact plan and what you intend to do, and then choose the size of your request afterwards based on that.
These are plans for impact, more than they are plans for spending. The spending follows from the plan for impact. The spending part needs to be sane, and I talk about it under The Request in a bit, but the key is what you plan to accomplish, and why accomplishing that thing will advance survival and/or flourishing.
Here we need (if possible) to form two plans, the Basic Plan and the Deluxe Plan. If you count the general Organization Plan that’s independent of SFF, that’s three plans.
The Basic Plan and its corresponding Base Request is the reasonable we-can-do-the-thing-we-are-here-to-do plan. In some cases this naturally ‘wants’ to be a large plan, in which case go for it up to about the size of requests in previous rounds (so probably not more than $1.5mm or so).
If you go to the high end of that, your alternate plans for lesser amounts start to matter more, but even then don’t sweat the details here. SFF notes things that could easily drive you crazy, like ‘it is often helpful if you would note what you would do with 25%, 50% or 75% of your request.’ I do think you should provide the one-sentence version of this information, but unless it’s already written I wouldn’t do much more than that. Same goes in many other places. Those considering giving you half your request will want to know if this means you do basically the same thing but less of it, or something else entirely. Often the answer is ‘well then we’d have to spend more time fundraising’ and that seems fine.
Anyway, back to the plan. You should propose that you’ll Do a Thing of some kind, with a clear case for why the Thing will have Impact.
The above discussions should ideally have given you an idea of the kinds of Impacts that are likely to count and go over well, and what aspects you’ll want to highlight. As usual with such things, it’s probably best to choose one story about Impact that you think is your best and clearly focus on it.
Plans that aren’t sure to succeed are totally fine. If the expected value play is to aim big, then aim big, the process on average is likely to reward this rather than punish it. At least one charity I talked to chose to talk about an unexciting but ‘shovel-ready’ thing they could reliably pull off rather than the more exciting but less ready things that I would have actually cared about them doing. Don’t make that mistake.
If you fit into one of the standard Impact boxes, great. If you don’t, it gets trickier. Again, the focus is on providing the fuel for one person to get excited, so don’t hedge your bets or try something convoluted to act like you fit into a box. Instead, if the actual reason isn’t a short-term thing, give the actual reason why one would care enough to do this, even if it sounds weird, and hope someone else also gets it. You’d be surprised, these are smart well-meaning people, they can get it sometimes.
If the actual reason is a short term thing, for example you have a charity to help with third-world poverty or disease, then you’ll have to set aside your core motivation and ask if the thing you’re doing is worth doing anyway.
Sometimes the answer is no, but often the answer is a strong and honest hell yes.
Doing a Thing is typically Insanely Great, and tends to have all sorts of good knock-on effects. Improving things often leads to more improvements and the collection of compound interest (although sometimes the surplus gets eaten by rent seekers and status expectation adjustments and hedonic treadmills, so it’s hard to know). It often leads to skilling up, and capacity to Do more and better Things, and learning about which Things can be Done and in what ways.
Is it likely to be the first best thing if you exclude your primary motivation? Well, no. Of course not. The good news is, that’s nothing like the bar you need to clear. The best thing to do, even for relaxed values of ‘best,’ is orders of magnitude better than the threshold required to make something worth doing. ‘Wasting’ most of your efforts is totally fine, as long as you point to the part that isn’t wasted. Most of my day is kind of wasted most of the time, that’s how it goes. Similarly, I still endorse that the world’s best charity remains Amazon.com, and it’s not because that was what Bezos set out to do.
The Impact Evidence
You’re asked to provide evidence that you’ve had impact in the past.
Mostly my model says ‘don’t sweat it,’ but this is one place you should consider sweating a bit, if it would improve your answer, because people do care about this.
You’ll want to put your best and most legible foot forward, in ways recommenders are likely to care about. What have you done that matters? Either in terms of proving you can be impactful going forward, or that you’ve been impactful (to the far future) in the past.
Emphasis on the long-term nature of this argument. If you did something short-term that will carry over to the long term, you’ll want to make the case for why that’s true. Otherwise, it will lead to an ‘oh, this isn’t what we are looking for’ reaction.
The argument you make really does matter here, but remember that what matters is that one person gets excited, not that the argument works on everyone.
The more concrete and direct the impact, the better. The more it can be objective and verified, the better. The more it matches what you want to do next, the better.
Tell specific stories, specific wins. Individual people who you’ve impacted, and that went on to therefore accomplish something, are good. So are many other things.
Make sure that within this frame:
If they’re reading your published papers, they read your best paper. Same with blog posts or books or any other media.
If you’ve built software or invented concepts, show your best.
If you’ve helped people along their path, they know the best wins.
If you’ve had lots of conferences, they know the most impressive relationships, connections, ideas and such that you know resulted.
And so on.
Also notice that you’re trying to show two things.
What you do matters.
You can execute (and ideally, you can scale) what you intend to do next.
This is a place to show both of these things.
The Request
There’s no point in asking for less than the cheerful price to execute your plan. If it’s ‘this is exactly enough to execute if we all pinch our pennies’ then there are times and places for that, but this is not one of them. If it’s worth doing, it’s worth doing right, with slack in the budget and less worry throughout the day about economizing when there’s important work to be done. The limiting factors on action are talent and ideas, so make the most of yours on both counts.
Also, the process rewards bigger requests over smaller requests, in terms of what you’re likely to end up getting. The process effectively anchors the final grant proportionally to the size of the request, because the curve is shaped a lot by how many dollars provide any value at all, and also due to basic anchoring.
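To make the anchoring intuition concrete, here is a minimal toy sketch in Python (my own construction for illustration, not the actual S-process formulas): assume a recommender’s marginal value per dollar declines linearly to zero at your maximum request, and that dollars flow until marginal value falls below the round’s funding cutoff. Under those assumptions, the grant scales roughly in proportion to the request.

```python
# Hypothetical toy model of the anchoring effect described above; the real
# S-process uses recommender-drawn marginal value curves and is more involved.

def toy_grant(max_request: float, peak_value: float, cutoff: float) -> float:
    """Dollars granted before marginal value drops below the round's cutoff.

    Assumed marginal value at x dollars: peak_value * (1 - x / max_request),
    i.e. value per dollar falls linearly to zero at the maximum request.
    """
    if cutoff >= peak_value:
        return 0.0
    return max_request * (1 - cutoff / peak_value)

# Same perceived value per dollar, same cutoff, different request sizes:
print(toy_grant(max_request=500_000, peak_value=3.0, cutoff=1.0))    # ~333,333
print(toy_grant(max_request=1_000_000, peak_value=3.0, cutoff=1.0))  # ~666,667
```

In this stylized model, doubling the request doubles the grant at the same cutoff, which is the sense in which the final amount gets anchored to the size of the ask.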
Do keep the following words in mind, from the application:
They really don’t want lots of detail here, in my experience. They want to have a good enough sense to make funding decisions. If they need the details, which is rare, they can follow-up and ask for them later.
Therefore, basic advice:
Ask as a base request for the amount to comfortably and properly continue your work, and execute on the core plan you’ve selected.
Ask for a generous maximum amount that lets you not worry about money for a while, but that wouldn’t be a burden or put you in danger of having Too Much Money, or put you under felt pressure to spend too much too fast.
Make sure the basic-request plan matches the basic-request quantity, but with reasonable slack involved.
Don’t sweat the budget details, this can be very general, and as long as you’re making sane decisions it’s not going to matter. All it’s effectively meant to do, other than a sanity check, is give a general sense of what the expensive parts are versus the cheap parts.
Remember that even the basic plan is allowed to have flexibility in it and not be over specified.
Remember that a common outcome is to get a non-trivial and useful amount of money that is importantly less than your base request, and another common outcome is to get importantly more.
The right maximum-amount plan does not involve specific plans for spending all the funds within the first two years.
Do make sure your requests will pass (generous) sanity checks.
When in doubt, ask for general support; it’s what mostly happens anyway.
When in doubt, give less detail now, give it later if someone asks.
Mostly I believe this is true of life in general. People don’t ask for enough money, and are far too afraid they’re asking for too much or not offering enough in return, and they start giving power what they think it wants and offering lots of detailed plans and concessions and promises even unprompted.
Recently, I got asked to do a consulting job that wasn’t world-improving or resume-enhancing, so I quoted my zero-sum-game price, which is high, and they didn’t hire me, and I’m assuming that’s at least partly due to the price. If that doesn’t happen reasonably often when people ask about such tasks, you’re not asking for enough.
You can of course still overdo it, especially with the base request. If you ask for a number that’s implausibly high, you’ll likely be docked points and the anchor won’t stick. But at other times, people are looking for a way to give you a bunch of funding.
Make sure the base request amount and the plan you’re proposing to execute match up and live in the same world.
Invitation
Like Scott Aaronson, I’m inviting you to check with me if you’re considering applying, to see how interested I’d be or get my guess as to your chances. One way is to leave a comment here asking, including links to the necessary info. I’m also happy to give specific additional thoughts on details.
If there’s a substantial chance on first pass I’d give you a speculation grant once you’ve applied, I’ll let you know, along with other quick thoughts.
If you need a soft commit before you can justify applying, I’ll see if I can give you one at the cost of being less likely to say yes (since I’d have to do it in advance).
Alas, there is a limit to how much time I can devote to this, especially given I will not be a recommender in this round. If I get flooded with questions and requests, I may not be able to give each my proper attention and would need to do triage, and I apologize for that in advance.