Long Term Future Fund applications open until June 28th
The Long Term Future Fund just reopened its applications. You can apply here:
Apply to the Long Term Future Fund
From now on we will have rolling applications, with a window of about 3-4 months between responses. The application window for the coming round will end on the 28th of June 2019. Applications received after that will get a response around four months later, during the next evaluation period (unless they indicate that they are urgent, though we are less likely to fund out-of-cycle applications).
We continue to be particularly interested in small teams and individuals who are trying to get projects off the ground, or who need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k, since we can’t give grants below $10k). Here are some concrete examples:
To spend a few months (perhaps during the summer) researching an open problem in AI alignment or AI strategy and producing a few blog posts or videos on their ideas
To spend a few months building a web app with the potential to solve an operations bottleneck at x-risk organisations
To spend a few months up-skilling in a field to prepare for future work (e.g. microeconomics or functional programming)
To spend a year testing an idea that has the potential to be built into an org
You may also find it valuable to read the writeups of our past grant decisions, which can help you decide whether your project is a good fit.
What kind of applications can we fund?
After the last round, CEA clarified what kinds of grants we are likely able to make; this covers the vast majority of applications we have received in past rounds. In general, you should err on the side of applying, since I think it is very likely we will be able to make something work. However, because of organizational overhead, we are more likely to be able to fund grants to registered charities and less likely to fund projects that require complicated arrangements to be compliant with charity law.
For grants to individuals, we can definitely fund the following types of grants:
Events/workshops
Scholarships
Self-study
Research projects
Content creation
Product creation (eg: tool/resource that can be used by community)
We will likely not be able to make the following types of grants:
Grantees requesting funding for a list of possible projects
In this case, we would fund only a single one of the proposed projects. Feel free to apply with multiple projects, but we will have to reach out to confirm which specific project the grant is for.
Self-development that is not directly related to community benefit
In order to make a grant, the public benefit needs to be greater than the private benefit to any individual. So we cannot make grants that focus on helping a single individual in a way that isn’t directly connected to public benefit.
If you have any questions about the application process or other questions related to the funds, feel free to submit them in the comments. You can also contact me directly at ealongtermfuture@gmail.com.
Have you thought about what the currently most neglected areas of x-risk are, and how to encourage more activities in those areas specifically? Some neglected areas that I can see are:
metaphilosophy in relation to AI safety
economics of AI risk
human-AI safety problems
better coordination / exchange of ideas between different groups working on AI risk (see this question; I also have a draft post about this)
Maybe we do need some sort of management layer in x-risk, where some people specialize in looking at the big picture and saying “hey, here’s an opportunity that seems to be neglected, how can we recruit more people to work on it?” That would contrast with the current situation, where we just wait for people to notice such opportunities on their own (which might not be where their comparative advantage lies) and then apply for funding. Maybe this management layer is something that the LTFF could help fund, or organize, or grow into (since you’re already thinking about similar issues while making grant decisions)?
Second question is, do you do post-evaluations of your past grants, to see how successful they were?
(Edit: Added links and reformatted in response to comments.)
Yeah, I have a bunch of thoughts on that. I am hesitant about a management layer for a variety of reasons, including viewpoint diversity, the corrupting effects of power, and people not doing as good work when they are told what to do as when they figure out what to do themselves.
My current perspective is that I want to ask the best people in the field what projects they think are missing, and then do public writeups for the LTF-Fund where I summarize that and also add my own perspective. Trying to improve the current situation on this axis is one of the big reasons why I am investing so much time in writing things up for the LTF-Fund.
Re: the second question: I expect I will do at least some post-evaluation, but probably nothing super formal, mostly because of time constraints. I wrote some more in response to the same question here.
Perhaps it’d be useful if there were a group that took more of a dialectical approach, such as in a philosophy class? For example, it could collect different perspectives on what needs to happen for AI to go well, and try to help people understand the assumptions under which the project they are considering would be valuable.
Can you say more about what you mean by “metaphilosophy in relation to AI safety”? Thanks.
How strongly do you think improving human meta-philosophy would improve computational meta-philosophy?
Minor – I found the formatting of that comment slightly hard to read. Would have preferred more paragraphs and possibly breaking the numbered items into separate lines.
Have there been explicit requests for web apps that may solve an operations bottleneck at x-risk organisations? Pointers towards potential projects would be appreciated.
Lists of operations problems at x-risk orgs would also be useful.
I am actually not a huge fan of the “operations bottleneck” framing, so I don’t really have a great response to that. Maybe I can write something longer on this at some point, but the very short summary is that I’ve never seen the term “operations” used in any consistent way. Instead, I’ve seen it refer to a wide range of barely-overlapping skillsets, often involving very high-skill tasks, for which people hope to find someone who is willing to work with very little autonomy and for comparably little compensation.
I think many orgs have very concrete needs for specific skillsets and need good people to fill them, but I don’t think there is a general and uniform “operations skillset” missing at EA orgs, which makes building infrastructure for this a lot harder.
I made a Question about this on the EA Forum: https://forum.effectivealtruism.org/posts/NQR5x3rEQrgQHeevm/what-new-ea-project-or-org-would-you-like-to-see-created-in